[Binary artifact: tar archive containing var/home/core/zuul-output/ and var/home/core/zuul-output/logs/kubelet.log.gz (a gzip-compressed kubelet log). The compressed payload is not recoverable as text.]
mB;ɽq *Hn4Ur1cGp1RnOc}LKPj Nf maM8:%teG*d޹**$TF #0B|kBož;.h.,~'<)K֘!,АS Y˵i.*`;c F8BdiSMj Pyhv;OwPt;yƂr~g kHt,;si'c{?oX.ಜ=t/w:dޘwxc|<}G,|+x.P k2P 0i4ZRRQM ϻ,l_Ⰲ^_\P,x#+5W9XEtP@ "-9Ņsa=76AƤBN3⩣O9ZL*h'GQ!Yo6׸o=f()nBR2㽕ƛA~ 'q؞ K2UZgqւ4K:kB֞w: 2L#&}:o JA*ƭf2jK ^H I\kn-OG9FZI*|lu=*{7؅hPzۃ>4F.f(>ST t^ D{V: DbQ 䴳FFC)VR9XV[Ј=)ZcK43.$OJ}v ^^qj ;Yt[lMާdLn+Z@%PMyr)_Q"8AEY9H M1Zptdizx#pT -aAQW5 H k5Bx1Eq(EbhruPHY"!(Oڄ[#g9 bs5ouIC|wê7o$LFQk] :Ok孱;3@N[^=wv\1Rone^ƪM"U6턨zt>e|w&Od;ߍ7lono On5ovw;;Twy4v [^<]W`Es٣9hܕ~|ϷKP [7 Y7WE-Dik`wr*H4TRv^EBGʭJK>xΓTd )XdJK:sNI)mF@ !lD:υfB)=e ;g","0 u)yRqa.G0?Q .e'GQl(S<;OQLyT: @1JE` *߻:oI\ϔL),$m@xU4ZT(y(sTE@^~Qw%? 1 t҄p&"qY`Q)[V"]HT T\KoZ?}+/55H8mX#R "P.X8^m,(*j@s[T3 Ubct { Y^r_U,[t~xGw33_oP9vgz>qIbf]eVjy.M_>tl?H^t=+?տ_ &3#13C; U Ncgők_#;pF5:ya0_n٥Sho\h,L57_K-T S(W=,5no"KPs~/bX/*uֹoV͛⥊2Є's`),rMg,oޏ0~y 8Qm(;Km>ݙ<z_;?7ލ-6̍^jҷ"Q?z7rCwn/BJb$Ki*5#(,PiCj={1~۬GevoM͍cdsF6VP(obęC_}7AdqNhg0O^aK%u߽Bq ~}ſoL_ۻ?/ɦ**909}uעqvfEs ;yVO;|{H&7@},[;]?rkc׻ofqOp5#fu?t+XmTSλ=~V_R76!}1iTB7 ;q![=>*^XݛG~i6]qs-~ф1ڊ#@'F5^CBЎ~ n5oQpQq!DoQdJJD&E QxAi^KrOj{I;>.hB̛*h2D)ފ+G9DbLvQWmg]itZXZI~iQ. ++IP&s*FVƆT1bM)T{Du0N_|Zږmm+fkkW^oxbm6< }DVy@X/i9JXc9W9]֐JW:_)!M,-_Y{=u~B&K*yPt6J!(2%H $A{آ+r{#=puȡ,+a,KcsKtݿiMI |>Zy i0ll5͘Pn3# B JeUzj"wLh_pso{oX[V oI=~9J(}nX#ǶaTXMs*>]}Ӕe-!h8Hsx0May)&e@'PB[#Ke^6օ9 h3/ $m{59mUw%#WQ+[$ٓ $&5:4'.o,~!vwhb73tG3Lz}7@"4-6X;t ;xjz`@M\~Cy:09aL TN1*R>yrk=րL+N)_tS/Q#TR$HgA-Gp7qL*D›zŪ]Eί*1'/B Kv9mTQƉJMK*pu<#NxA50*@kTdhR2JRɡ(,N:$N*$X1$R*&4R-clQJHڏڏ%]Uܑ|mIwz/h \x+` ;c#9Y\*/Iz/D|D[t ?fe⤝&QBÒzt!p꤀n "Eę5fr#b]H" :*kKh؃%VZ{E:f4%D*@8pjT,eEry QFI{ gKX"ziOM;i ۃR̤vK'3hSTANrp*fIjWMX&/z"|*EHՆWUAC@[iˡ-m`2ؒY+{(9Pxf7pq&Yfy{jG#u ok079^{ᛗ4#vxhwV*+"B9y k 0@ OAU";EoAwѡvٲrpP쒗߶́A 7A/< |iRiap-/KpDDQ) <1TƜut^Uڍu74vm9 PRj"ȝ "tJ Rpc\m+%h^jn lXϳ WK%Ǵ3| fz(. /fЯ3# ho (S~-S^glL1g_I*rXDMe꯿L{Ə;#z^e*w~JN88+Iç7HwT\yiϟ #lFu`"umؤwMsgxzlzoyG'G<*pY5P!n}Z?p@3N`>yJ{bBс?+B "Z0s7I BPg=O9L; OkXSgɶ+Qe{g`0nfHՠf@fY! 
QE,~Wb[NL.coK=͖,ߌ>g`Mg6|V!͞Jbf:fٽg+3zZ3;{SW`f~|tٶA>*g)R65*^^t31Ac'Xvԡ߅7^ Mx|ӠazSqE]@םue~ڙI޽vO3Wm^ёRf!PkRcnU5]vNfŌtn;LiP_-8g>@m{J.`{Н/i.ڼ#ѐ2uAH2lӷDDw"(ZTҢM녴x^CHzH-mNm^sE9^K"y6Ղz-RsFj';f='oY ZP,An:w^.W6_XTheeͳ67kiUz^K; X ^ x3cVynCc(|l7: 0$'Ll.dbZT1d"#4Vӷ ia7½})}x X"i<1CoG_0PB /_f?ga8\4ǥi\G Prf20956H|M>^,BY,_ {Z'~=W{RQH?1¶*xv7Q pRFKBٯ?Uۙ O;$+I  ZO$wg0JOk/UԻ& '+>s]NO͘]*+JUgȔrBG,&_*e/le[v-X;P?I֑> B^fgCr6aKCerzj7eѴ| wT)槄&@STyD2FSx'@no=§t*u:ëi:S3&&s6g;~zstz0+JXp2}Or%;o*R9S4BLG{ fDjHC_ߕ+2 i\p\1۔?j%^[>֣: > Be{rj5U=hI,:d-S&{}NZH :$AK:Vi $Peg}淡h51k<|vUk^kR1W.k(zݙyA'AQ2 /yN.OLe[oZX1c2b=6MVHKD."J` ӳ]SGJDW1LF cYL%a 3hTKntI /,Q7/0.0 |# ɐr]AZug;qN9j=52T.B'Kbкl1[= QߐĘ}ͯ&T2 G v 7=LGm7`1g&Z2v֝풱CirNˏʏb۴@ L˛ELz$ d_UxR9c 'H ЬpXy*Tta\~=<V!!: Mz ",iPx%LR#1s) kpsZia`yVC)4\"jU"\O[Wn[nY[D7L>*Xu@& 9P>Tهw9zB%g?𧋣J FY; ZDl ha%R0#4'(xAcx'Ñx^Ym,Я~{s=?rCY1b҄$|lx!{9!OMfl Ouң'BI.;G3VJl1uIY *g9sȣ fHzHFyN;(^buė)bي:<%hϭB CkNc% VAB)Ӱ$\.%6!`BL1jv)S5]Jy>%wkYa=ueLΟQ]Gԓ+j/=4no67QhZi8peY.s~ wVt +)FI[?b͈taYe5(0gSBFn9$xB)u E]ؕTxGDJNcRɜRa% L%1#AyϢsǂOU)j: =.1k"f;**(Yo #{<1IVbXqT {OYzj,Ԇ@mͣgi@*Jt(BS09BSYb55fIE=#uH$l %`%5cj$Z܀9iCf.Tx} Odzu d_eiʠr9.4C+1d?+=5\8`H8Eh*Q bC2Rc@rL!+ :RLݱLµO @սٿJ`PUZQlBMqTa}w’;+w^eN9OejN,>W)8Sb^>U+닜.|եOLMBJj-9>_2-z:+O.FXYvzY]8#2̵r5rn^mI G_n&|ra!э-1fHs3y@ubyLpTȧQYr%ipp7ɢͫ Zmkr.Z00h\$T)|QnR4[ay;sW7@qt͏ow_}w:?ݏAf`Zi&{wt@}^?ҶinYTMS娫 .f!7{}8[a܂(rп~7;OZ ֬UOW^DE+)1]56b&s9Ҍ݊7s9Pݾh-=vWZj(F޴:K0:΀(Oy&2K;+à z#(wRoc3L难=?A(_ _%Z"-8732y̪n Zt ZXkyC x&sr`Mh.y xH:BkXSMqI1vY4a#A29HysQ >8 ,a L:1[]ڥ[!K)P֬O 3h re߆I. 
|wU7-}6 ?G &}d3=B۩#Җ:9X+׭\Qe {!-v;GZ~8)V& i1 7P*S( ʕ.\hX4R]fPwnldRN( 0RQ 8`狠%7<SIz80$Hm:QhPʍ"VFuLi'YqGQ\$JH!c\g /Ig\rлb4䩸c|nm=G&G1bj:TU*kp0pS>lov[f5.U?bۃ ffh9lN/gjG `]M_`EПF%P\9iE28Tm ?Ry_͵z(dub~_< y̪9`Wt8~ # T2j: uI@<96qDLDYi#gYfc T<N PuyMɧdt.ZqH)cJ\d#҄(rJEa*j9]k/~%vx;Źuzw[ i,|_eѕY`fL=XS/9qiF/w{8wSY p 4jqG&݌9R?~Bnz/f~||%fi9b歑8[_=tcdV87pyP_iamRƣSN%gz|tH%eK-tptzEC&o>-ߘLWV]dYU9ZMn/~y^O"mtpP_^P 0\U'łBj4IӊkCh76y&ri{6BoCG6e\ [J!\_~˟8Z.z)'*Fq3Ѩ&,/ K_嘭"e$6vIGnȭ gמ61tœvoVg$*@TQ˨NEm喨յB r9 rN(F * N:c0IAVYY>ֻĐ@@U޶KZ1N%Ag溑 MC|ؔ6Ju!J glj~Ym=Щ\hLɫ4E ;(:;PA#P'^|zS\ xȥA1 (%4XrұL)1,g'AUȤ\H}SdY7Ye:8rp΅,-} k,vU"B9otƇפ8CCO=u =klBԧӓm7vnRMjIm7QP*nFݤFnRMjIm7&ݤA}@2WJ^>1χ?ͼ 2#p֡)4H`ip}.vr܄ J6[V v%Ȥ;a̙&,1dbUegyv %2biyl& z6ߏ;6S TǟnQOIA$ Ppnг M6\4MU=ET+gy0阩Y>SsdAJC-XT1iprW֚<9NsTRGQ"g"ZJ>S2CcЌbѠ(KtQ;mJ>%p̡+%+"*.1\Ef1\}&R(=h4VVeYm9#=Ǖn ^|ؓk~m$*/omOgGLxk֪Xht?g!ȃ=2`O}wܛMv YNu g pÍn5b7q >}̎>i=3? Qq=P>7'Oc\Y ɏL<.OEgn)uJ-lDs}cK:I2^},4=|v]-ϭͅ;mX_3]iyl琾Ei68a :-L w[/υ3.0!dԬlRWnVziuդw<d2_(^IPƞV\:wHAC,x!YjX9ă\йPz^uUҜ&S{.NҚ)g6K͜版WJECrT΀g/lǫPMڢmLjzSOTf15Xi-: a{cd K:)RAj3ahtX>j=pvNhCW&kUm S=}a@~2 DDt*V##^x[R/{k "k]58NV>8ES'+v1%3 #gy ,ܚ$&8 yr5uˬ#Bk{|H] ȩsGVQ9i;󥩉C!r( RpL3 !cS@G5 9P7AŰ?_tM_q8.vHh& 6ѤBx$C7iATFA,dI)SL>DPʶ"'3>@i1O *H:V^o,,X>B їnΜFrZγdwQƠBv&FnKC0tۉ3"ĥ} mbv_}KW'E?=b1]Moi~qE!%qu!Z~wר{w{hWY_sP~/Dۿ \"fvay~cۘօ!Ǫ38'fU^Hgf:7 J{Ra0=r: dFnZvD\6P֜|zj dYIeˡ_uQ_Rj$Yֺ:dMV(`#\җ2kU[MD3 I##']&ޙ\?>Ci$b^bZDŽ-^f`..@Lū{«T(BвP,TBoF7`p8L|beT$Bzڷj}w˃]]-R64x):*5A8YD%ǽ")έ:%%VsTQx٪UD:υFh2FG`OpUr. о MɞK5g <D$-\iÍN;^9?NqJw戕4IspQ;*Fb-22@瓜9|EEc;/L.gJFNf JJSt*6F]DD%YۙEW bo9*ǣ 뚰n3q>B'Mg"Fis:fe. j!RkM{L6f$oh Ctr?Ppt09[]\z[/8~:G;!gu_¨7zk}_ݛ]C }vWzc-sji?0.v%HO[W8;c It> O:z ~dyBp'<ي@O.?O7uNtwO]t]}P+OՌ`oϷ>(87-w{k7հ?f7g{_P/oϷ/?~y_^ 4̷݄ߧFg =WohnԣyGs [<󒽞ϥ@JvGs[g& ~?r^'>'_,v+?yjb^ B `QO[O9,DVjl? 
]^,H7>h޿+&&}Ϯu4}WZj(%vڛfeXu\h.C;k+ \xe؏|m4:+Ss;t5| !zC,ED4HlR4Q4=tddVԿ=#I9qA{b^T&F!"WNqjx+1TGU؜x& t^tN\Z1CY΅?zڵvFi LXtD㢶0iU5u+9Zw{:LYxx> :M@ $p*st /FDƉuo&n枛=B!=zJmwF{.Z6Iui4KrڭrwNJP36G31 F 4Ƀ0I(OJTL2[)B%4N8$&vHW_XOy.=^ie^-sDV()5; D锨ăCr1ZPji wCn njd%|vTS*W''13Ro<| pthy8)+*«qX8ECdVR[->@5H- !1438 gRXfqyJ@ dZ_xC8=zwf|3H͂t-s"k>tj>~GOJpE'  מ%@ (OA oᝣ iKnq!2xX] h߀oSҿa[7T@j߬4zBOi|jm_0lq#x4]p }/F%r@\M 7&У{{19qYf]x^}XrL3Î-̝Kvd=>\{6akm_l{4 WjV^ʫYy5+fլWjd;s+fլWjV^ʫY*f>_y5+fWjV^ʫYy5+fլWjV^ʫYy5+fլաWjV^ʫYy5k\+ Zӝ =dI&0è8N +xŝei8kdclHMhb [yEO州/ xw6)7"$BHk.qQrBR t!?&C4-!~ ^~564~؟SNٻ\(^vӋ?K/{} Da`k}6~"`6`jȘ{s-0IR-^yG`4J$|сPAkǽu\7 UEl< I8 x8b$FPRcY'n O#ogR+YߜG_/'?.[`R &N &dHv$Z#E!=h,ˡ` }@afזn5@7. ǻ1z\:/~.G=;'h|sOd~Sv0k23dWvpۄ Wn žKحLu`\9I{˷9z|;DNzڨ |^dϣ)v]9C"e`T5rL87P;ǥ$?L*h@@A%)PbT̥zz,W:%RlIx ʰL@% E2>9-h+D<ʍb\p!ql ѾtIR͹]DɸM8U;Υ^.˶b~VWm?w/5_;q;`Weq99=oWo9?w˝r,n4YLA?X\QFm&H%h*fU"s<]C ΃5*d4)V)S'7B2N AR-lB%UZ3kq3Jً.,63 dtpp٢ܷyN(27;y`r1{g4v0}4vU` kf#9Yh\*/IzbDa |DVOv%&*!aIHeӂC(˄Q0Hdi/%LG I4ZGem 2VeɘєuIRѶ^ʂ*(n*iUҪRҞ@(pu}CGhO[U}G$mQvFI*JŗE& yҖ%''%"ZB#X)E$H(tJjZIEZXcNQ|l]٦9Ց9Ur7Жhc^N9'6˕V]2aTE( 30't*ٜFm"z{0iX.8dCQ>rѿ9C(Ϳa^]g3KC#KDٜnZ2|.p]uE '4::`SzL-Ԧ1ym!aJAV%LeDE%B'hYU_b,4Aա6'2^%_?V踂z{pX*˶#;cm)̞^lQ"c6ܤ` r20dÍ. 
GZ1= ,j69Ր H $48M$Ebn [!yN3_7 f fvã+ۥU&ݵ@hS?G&R1~ut7񼜗 >^{q֯f@Iny<~߶x0Z><+|ߟ[05Xs>mǼYqlqs}(Itfxy)FO;]5t_Nj*G{1{rҐgr)\N\@_Ո|w<9x97>g'7qaS4]V>}f}3p|'Q`t(7-9ضX@wNWGzW:e?m(:[PEGlFN(!=%sd9 g=J :=`BBqe /?4L+Xz[TSgְ}=L-WDQ0''\B(_18PJc!`S>P;i#>{Z M_]3OZS:BA*rZ _Mei/x]ËKb1:I"ա >NLvP$/KBE/}п FiT"HlYL}Qxf4DXRG|Nh]3+`a8"E` UZjFc-t4;wlTrJk'>A=1$QڝGBˍ}}t>~b+0)G@}e dX=& J)/Zt[0h9v<Q⭓pIfTFc$M&/ȤH!YaRSS&Cϛ_[jml67 ?4m6Ad IX3lvV"p!ZLfm{F ޙ4>++DB(dJa5RuD3r6KR6,YK/ܧ5\W67;͂Pn9 ;ʺbQ@B3ArA"3O5mEf/n5|}uq/B~(̫;Η78#vy$9Gj2sCvAZCAed4!*MR[NX)ERh b=[͎R:vViCJr"aVly AFI9"IOIyaBJB"±`)2R,3U;ͦ 6  ]P֜cI R@ 2#j _!'A;'sbF:.*w181XsR@h.&dT.T,\ jr2+E4QƎH&9ֲ#;}QMV\ Z*~XLRk%v8tгAϟ>b{uO~+ӿ^s&w|7V, 5rc &on>b0gՆ>swRUr#c`u]$=^|(Vh-6~^< Ґ-,P/eEx32=+IQiSf1)3BE8Ȱ=}@nOG?|E?0B/T`X&d̪CW-ݲ0?1tP٥ꏨq5{r߹\Ng#?~d)|;)*'<1w s7Fw^O2 h>-)F/tz vZݴ8t G | j%ܘq*fxf{8xVR+1DUX 0جK yJQǔq>097O҇ *g]I^:~Rt iq0#tỦ ep9*3*idݭU@a%[ &~yӐǦyֱu}{wēnܿhլQTq8J=l>=v_Ȍr7-dzUó]nӮ9Qˍ \ۋέ=eVeuCwjIa1+ֈrQUwT_e'B؍k~~Z43cdc ѻ :be/ ET2RkH2 T2.<^a;}a]?}PGP1Q;#O擯t{`w~7K#Ž f*2hEp^+,)zhX+@Hڑ^HAu2@Y^BQǙ6>$cR X#;U Z*|Gx) AHBӒZϙ9T@0x⯤B1 frZ1eHZrBs76t9εBdbKb #X! 1fʧRȖh (KAT 3̅ HذՈvJ|7>SdGQ+i,+kkI6h-I'F>כ+I*UȡGXω̦$zHEQD1\qiYi+j[V]Fcײ-m=X'0ŨbTQ_<9Q^>qE r\B\Rb& 9v!C^g<2$è E)6"EiQIUk`z:Y_TX\cGRE ѳdDSZ]rcit@jX]lgfskD&o(A $>T IAw-c|ze}Ap^GKqh/tBe_j;J*A5*m@ ] ,gfC9 dFiØ3 ^ԀB2^O{0#U2gvavN:rIWŪǥ7a`ebBtJdZĔ0IauĒ.kH56ץUkT{"!(Tv#DoaL00j]mj9{qF\*A('qHҊIø?P~㏍#Łejܪ, `@[]5Wǽ٭#dgYK!Rt{/N4#L4M{A}Oi'}'nb6RQ7䀼4 #h㿊יڤMuۤEumCz5ٴ#;Ek\pw1U,Ylt{{&MmW fwL[Ю ㋯؍s=O~~wP:k)ޔ0I8#P,$Pv!#<j咅ɑv1U+.ҾՌj9djKdwx[["J=t@"HD: t@"HD: t@"HD: t@"HD: t@"HD: t@"Ht`- ["_0fHV'@J͉tI;|'je8^hXtT&$Y|<|(\A%$~&Hlq[4:Ͽo>+'%]|9cPd7vV'ךgJprrX}w/OoۿѤd-|ۛfɿn>X>L=}tL'uL}}燖L>|&q8t{l]Dxj~ԛM]Totv9vH/pu)W:y]:̀zU*8*&XgΈo5fth{x72wCx7Ļ! nwCx7Ļ! nwCx7Ļ! nwCx7Ļ! nwCx7Ļ! 
nwCx7Ļy(q n }n$KGMW»iҚ4)!;woyum/5> &Z Ѵh+CeBjٕ-]6v]{{h{ŻW||EvCjO#8V+V^Qh^^;xxU3Bw\-Ǭbދ7}-z5}ׂq=cd[UU%?E6![Zp-8-xk u]lެ}-gCLPTc[q08UQws;\F6qv7XQ'&5ňQWcumwbĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bL_Ƅz_ 0&`4¾}C Hk3& Ęx )4_!?]oCR1CD]2ˀ41U=.Đ đ2$Yl Ldˣm~P>Lm#ʕ%ŋ*T;vw@7 ՏˋTW8bgp0?v71_,wmw??`›}e84a9[*zz6rh\7SY?Sm5 F=^=,G9_fn,^qB0rXiZ3?X9`捑a [r{͇ów`AW3Bdf6%WSNYsh4BVT+3Wm2tJ6Z gs<,aRE5_\>Iw]>a=t|o ܛBig}K%o{#MG龜T{B;k:#䕎1}=L,7+ò1V\srI!zR媦?sy;Mε4ε*4өG. S#kXXK8Uj/H7 .TRr+C 2Ȯif<^48C͕7yTB\eV!&, )1W4_ԡt돟+v>?/OԂDF,5`Ɋ9W+,F* c򋢳g9l0mrj;q &;; Lh U.Gr+32*Q6Ys](yUuL֎vMl3[?ݡm !& ؊V#/xsU k#3 A9hF bd88zyM'Nv~y'SAi*(2"˥RŒ[WnMQ\ǐ5=D}*C-8"5>0r2cSp. !"wxNU* 22(cagd&cnXÿLJ_-p~ڃp" !7oۚ7^Nn@d!6fVwl)y>αijX: 鬤Ots|=5\_]S|*dyV9Y(Ru=nzzᔹCVY|K]*|y1Cx 'o vqOuELVw.frrKN_c9iY\c\|sS''wE]oGW]GGuW㐵qpI.5~:SBRW=CRԃzq(, Uz\M}37> _K0V?7 +a!yg^ӍcC.OZ]D-I7l5jVA@*dfA[O(j%0t2Tx)B*'bTQ!cȾBM X֔,{mjbYjzƥI0mUK]xDkO}%hG{xr VlEƹ`"U%>E2mp&ZFIyW,Z@l%]&@۩0Ra$p6rAҮ_i!+9JٟR*t@y^4R6Bi2ϴ L3/X?2k[}$=]RJ,BU}V~֦ds&I, 3KrEf1Z\ :+U^[G'U~Y}28#̖R&gDn]Y%D#V}0i51E!l$x+&' < 1WHa&9:*`z}Սηu ?-)*HOdt^+.j`ݞN7nޙ~S}h<[Kr]+&VީhhvuTE gd.!,&i7TB1fOr %$zxFa֋R495qeRmKȹ[2ގR6" [[‱edړt+OΊ˒`fn8L1H@ǘDԆꚱyMRuH:i2Hm! HyhǖtqZi䴝50UIW?4 ^JY霕6)hYq$VǴs\Wܓւh VP%I6 x=iI [d+#wG87[ ZO9K*Bgt*) X O$sUNM9br}*M:s/z8.iy>,/gnt"3{lJ~~n.~Uzw~ f5kh0n5{ȕ2={5 Pף;椞I.zS// Q3\P0[*Qqk* UA*cbJ[NJ6JFZO p=.bau=e'(HS,J٨9Zms:ߢb<ےOã%r6Gڣl>b`܂@мDJ4 L="YO4w1sTv#|>j>Dۆ*GR1(J3B%0rUg%. 
o#6uR#Hhom/y y~V,xޅOŪQԁO޿yN^.4)4[& vYZ_9G74Roj~/O .A%gV~l@Eň{ܐ"PLJwPKesq:^jzo,9UUfX.dI7Kɍ4n>B>Aqb=G#>T{"}ByK`$OOpwM5b&F3'>'*}EwOӧ{D>֦_;ߝM>N/^f׉31'}=fl#Qo0t63q7~Ɩ$Җ@ɲff-.Ĵ1 |,Xv>E'Ǔ:wN[urY kHiXpʫƩ7̽"6  yn{5{AsO?r ÷߼{[~?0h $U{ VL¯`q1oY=^OvysDV{},[+a $Og_7PbygmAu95#TNiOWlb\xܛ奪U9̩ '!F|(1YL <_f~[ήR}* WZj0g2;K09Ì;P]e lJ&,fUdHЊ-5ԏ߉y+tLCL 1E*YdZE'=xΓ G)tOm+"QoWe=`hI''GIbf)}wVUmmM|#ls txހ;͸!6PQT_m8^ImfWମZ StMH [[~u}w}u?ю[l;!eQ`DXi=G rIŃ]אTP6;NrAŖ3/HZڏL>cg&R|H 9_$ˢ< Zn NjUH\֥ȲR3ZK^@cB#/P#Ylk-r$Kz zWLa=u']*e}}|ts3(ɭݡ8o"pw⼉\ŮyZy8m2wH\ZWDA+*+7#=^15|l7-D.rܬz`6#F%WJvSo,lroi ׹j=F#阆 iY}*X ܃Ap0Ӭ+O4re'}RPul gZ) Ti??_o.8AġG9uv "陗S4=qԲ2oQ;$t ,%kHTsD.Iz[qڍ/Kb_q!\AVΗ J*qf+PJ,0d7+}}7g3{̕ffv3 'FTp0VYI- IPŢ8Ù<&=}&@Ab-y*DRڭse&nduU }#9-.=Xwr-7S7sMn◓Zލ;ӝ͗bc6%?B0ڝ*+~BDe~BJsBڮMx<߂.Ip2L.Ҝ U:]n&ZA%DQGX!j3 - "wEɼ4+SzCp,u0ey,qK`6"JZ92Pws.G2q4ܲ- <|B7{*GσZ˙ f>yjl,>R>c+c|e)iٻ6$W6;%}gzn,01K"eodIHJJTFnRyU_DFF0A,!G&JO c`J{1oowV87rU0x) ]ql;H?\dx~= tΛ^}ɟ2Kz ~ Dϥ R`i"tJFXBS@v;9vc~e2Zy~*f۶;2ËCkn?0ҁ #l\:JDuBC'ZhHp}zFG ,h`;A4Q; Ijq)/*1ub6.xpR!"e`u()$NiTaW`>o!k8݌mi!ooCQ=k-{t8_}WOuagp}gv쎮gZxG7O׷tlInZ=_c_@Bۻν~5N1^{uܺ=nvf!e5ݶzoY=/NhK 5Ch!wY>xd;:]s,\Z 4 Ҝ/tߋ:- 86/yBqw^RL:d}rl#RSS>p;1fKXꄀ*(2?ב1퍝V ,whԟs2e/~P FY 65v૛]f̿4g< W9r3C M>5&A)L DL"$傩DU `x&H>`wD+[H*>o&va9BmSLڍen=߇l\[e%YM7'7 }c3y'50ptkRȬ-v[gY^&˗w3;xfYwWGf)^)&|i˙D'"hD;GX%#\sMRFY$$˖BF[+4[ 2h]4b, K PSTmCRȥ*22k̤sD~'yRg< ;}U6\R;Wy!$tRb.pjNȻJ(wTv[sܬy9ę3RS͕Q PɃ2Za} pY@ u5LYB#5"r҄%,3QkKPxy1jt䳆xl6?TB9D*3F:6Q BQD˨ "'(6"IP!c,>gz9}d4'HӔ,w8kU2ϐʵM))怬"ZŠ :;{1Σ&,٘8#JET$3f$)<he} #("8N*֠M<1ր5x7 W@Mmr`.P eZE)(oLPhy'`PYŵ8V9 _t}*WD|0+Ia&J{0"J^ricQcCD48B(A8icO;QSuV I4Yܪrhgǥr Qr.68qqW`Esj|2 o!d|3HSlВz"'Z 1 Eޥ})QF1z`Xb8ME-w"7/|2*X3+kt~\ٽ%5RVy* Ց z9t0o07Lluܗ8`( (O U3W8VNyԓ !?eg>{e(7G8ʍH84 z2 1JDl $d_Eƥs [$)s$IPDB,=#:j1>14eZiS;ŦE.NwAFKںٕOS3eϝ`h$Y vHG(tP r#geۮ R(r* ReJ@\#u0+H$>т^2-93L(f S1*x"$GN!(B&=-iRބViJ XpAy\T=D-9GVI$~SX,r4BQj0x#QDwJo=Eȇƚc\.284PV20-6}*5rxlr}3KYxr 9 u[OcKXƮip2* ]-3%2ꏅ͚JXyU%T"j%]bb;rڲGǤSK@W"ɂ iXd %:o|sѤCtJGʈ(/(45)c$p 9] ^eT!2i5w)XD[o͂OϻAe.[^JpQviNIVr 
=bÄT9/UJߨf;$,n!$hα:ku70 iFS;y E2+4j!GD@FZAu'1cCPa,^2>D,Az!M2p&#q`|f)KV"2%QvRM0 2Eb4`ǯ 4ͽ|(:,d3&H*ʎ2Sr {+똔be5{527hCq\ s|[yKW!?|Ͻ+|_coʭ \9RY{?n0q" s%qF 1E]rv3xwgYjlQ$@o'$:"Ӟh7O-,n}<<}_{B =ҝwhHӢym?H=㳯*ʄ!>^py3ܿqvz G_è6$/5Zͳl\u>yvr520{8.)a\Ï0 Wӽ~oDh4 o8I6# yaX0}A6ᨑOy w_''QlmBQ|<H>}QeqNho4=_:5b[ i=߿Br?rOtq;0>EneϞ?3,547М\lwK)qK6l/Tn@v0_\wnZ9NZszQ^ NzZܧ_fV[T,fC#B biTB|?瓸ǤGZ^>z x-cjno~;MYO@ hnE$Cl@F$xbe penuԏu4 Rz|pQ BQdJZF'QzI ޿O}x9qm\a< 1(MF!"NsPkx ֎r1"P'MO9z ì6`.HV2 O$oo֧^7Ҩ{k)WqqR)6s*VUB@BtHՍכv,]֝YX]u:6_YReeG2 -BGZaXOog j.  |F5TT6HirK9֥;Քc!%2ԤT XrOmIO:y y0ś%<}[K*/}9'`Yy#0uFK-ylgZj|[6n :8Dž"`!Y8erí\ Rp`B :,I^998\*j+>M3ad\IP& Ià~5ר+ݻ5_ ëh4TB$Qْ $& 4%9wp~=!~QJ;uC.7}D^#G5-ߟiu=9Y#Шd׭6.xpR!"e`uN$(RtJ;U[whw-͍88Y$ne1.Ol eeJ(l%E!,?;9PhL#`H8EIRU(뭏*zx ؖ;2@jMi(2fl7&WbQtmj<#)4Ǖg_4ӓCg̡Un㟳a΁:9e\6vt;Y=s\z=Orh;xQg I(& B0H4TGv$@hXc*.sRcNVZ4 ;֞8=vr_L3򅫊||Q}Jzo5ygtKL~NU7htۓdžu1x/f!Τb]1z Ec[(۶,tAiUݦ- KX8JNl]R딳V&}ʹP-[z9Ql(ͩZJa/*Ub%Uq}:fgL2Fl*rJ(IYDh8Z"k+"#P::wrwwU]}DwR[{6g Vi]U)צ]==|/]L`:u iՋuDeR*/w/pW0z vpi݂/k[ mQN03_]v:ox4Gp1  Ҡ߷t8sFn`MWq "MW)F17qg䮪]Uq]Ui} w&J ISSJ{o?/I ߟ>U·sZPz"%S4I4b-yZ1PJz K+kR*alqӭ9K e*>~<B):xÎHR9v a .*|aW2dr^DR;~"J)Д zZe=H|r13k0 KuA+Y)oٙ4ԧz WZ M9={*\hJd ]OZMZȦG$Bі=4`."Ty݌t Q+G9Z*鮁VŃ5{A3qvgfO!?h td唷rs“:I{aܣ@A B YV%Z<35Lg yD%8%y0'f# xv9 YͬX M ZM> Kl;//6tZ}Q ZK&Y eRhuy9q@,kS{,KH8+}SĬ׻>ul ?@A=ch,q8׻ S&-#7%ù܅,y갶d➲6uM0{P @l4BFTdHg(Mv(Mpbq^@W1zy2..]/fnW^ȭMn"ў̳ٓ}/&p1?z! ,yuץp4=gwe%Cubg = GYe vK9Z N<6~敞L/{,^6d)T[7KTW6 ]]`X`a0Eփn`ol CM<wj1ԩtP/R[[B6m[jӋFO =Zַw$QWU%;  m  *L iSXvQ$kK3Xn3r-xK/LEG ?3(Q[:*cdt2d.f/=j3;g]TZY'$+r U@9E QCPڐN7fig3rSi֔7}L@)Y cMـXG3 9:\cpqNя6&ݬTo/F.ݘ/75Kn0Ɲ&7qV 0]8Yq/9~gt|7* S\B#c]QYuE#(_Bi\MQDLΗC},H(̓nˢCFP[ 1SX&gĔW=ҳw@ך19k;ڽi֩tT'T٣A*~e짷iUIG<.y|i?yvHwEE7+%!A}nN&& ZKXM# 04DJdJ![fֳ"ǦKd6>jsŘ\i &*ebQ! 
+(ti}F8{Ũb-mőޗiN0gHIp>'>(Ș>~Y`QS=z}A nZ 19)`+ RtX9B⁐)%\mQ$:E=) )R҉,PSx4bz@J<3u\.lF!7iL> ٧޷Ҥ۝s,XbP㱊lN#$($ kp RG'M IɢyOٙɻN$JQ)-J ۂY+'qٔv5n}>n+D%>]L?/g&ndҏ!GR D:w5tYOq!B!WpT!(·d#GD![hfSNA _g &@)5xR$(N2KD%C>ւ&yx0)pf=x9Y]f(6}[84ϣ0klPPQk{?ƺ0!'`s4jllTCux8 ԅmW0 cd*n;zٲ>֮LQ}2ng5K&`*+%Pu>f*}aEt=† 1GsJ :\ ^M)+ɒH[+qܰYyyW'(?^۫ A;O/:l@-ECu*H #cR`C7 bh.9$k/tq}Rsu9}m mVȡv"$Ae!o-,gCVjvwR]/{="|%}\cGYLCdRLsXgU!KE1Xd$?ymiCi;S^(4<4sHu*8 `Hm 鐚͞uʘugu*jT]q8Ӌ֛fbgˁ%yzr u|%/|oo?ɯbU[/-OXf%/Qi]< $פ2'Iԅ)dR}j-u'-~~L?{W opOFLFLƼl#LA):uŠ 8LI2S_J?YfͿ' kUD%d!(Ȉg=..GX~!.~#x|j [/ gkKwn.w"}caӬ w;㳗:L>W>Z#;4A 'qE:q]}QulԿӬѴybLǛ ٷh+Zsltr:[vih|1溿4o1g4I*|?|cZV_hXW|'w ή:䪧1~驓zAs>ʵ ]6yt1=?4u:։v6O>ʣSV?;ǿS돟Tǿ|?'k.*zx?gy|eGf[=7fGkrԳ \iD!x7cs`vKno@Zv?IBO=p[5g`TxM>lSr(ϺCf!U>ޅ[=,j|@jOz=W/]=熿| xbVg8?u?v?BpVLRX d4 "Z!h}Tr & K_Q~(t r_/R腳KJLlн?^҇]85!+z Y$G5Sjd5 IjELi8jruP5cW_k#pȵWVx5U\^ o6PIQZ=Fkޓnne?χ/P}1@=AL#lϟt~so/5^q1D5nf 2zKCtR6nnls;}MAWyvBwVi}5v*LҊ:3EZ~?ضn =R$U]ƥQ-$/!?I^:>\jQjCBpQ7טtKN-8_D1tddPP 9p@3D2&x<ϯ!䯧^:jۣow UnjsPr)HH0CJQCR}-$)D &rª7hoaɇL WQ^h޳1'ƗȰe(D^RK_BVqhM޷$#|T&tl%f!*V!2Il7^i[-kM.dQDȍ隉D'% WP4ӓ'YV-اV)__qW\.¢o=&uqͿ{zNj?#ᅱ ^yrnp)Dt9;iyVz6^D9`b)TA4.XGvxXTz(עKL 0b:4Hi vR=c3q6{(e/f<.*q; خoj"M6W?NUɟ6^c̺@N`xR(.GbPQ4C~qK זR] OC))gg3UD_8!Lt%ZgNc\ .k7ӎzmk?}fҢTXR5M.`#F_\q!xZg֢-eSOL4E3VkDvTȎ#B 8a\WJG3GlOGTGymO7vw%NR>^c۰DoOwedcoqgչҤ2#tVIٙ}AI)C~ ;ۢXI&ܒU!fnET$"qtUVR(+ MA_.D#|>T`6ԚRTg˞n~:m'VP+vf &v1eOGbe^d !Qىzw bԨ)f2Mb9E(HȬKG؉G}ip cx‚Uޒ@RSC.h%Y;lQ_setqh"P2ZY6NvO¬U#!f@n&z <ӫ߫ |Iz-6oafP/+3͋M޸q{PLI5YW**fHy%%J͋!L,IgyT( \9ozo>>7%vBB778fi2`Vyt3 Oؔf`WQk*Ψѹb+Ui R%OgTy6_p1._h)>i- JB4R{'ȏ;Wu{"aN(EKf-D~[eLlNa;lN7S%Sf%m):=U+BC{ DjBΛr 8M(r "HrbrX+?"D4 6'R`Toʉ>P~oBAnXn˕`k=ϥq9e㊡켗 ^cTZ`BĒcq ia '.TPĩqBt\D#5 z LHB""(Nـ5 " jkI!eR!1G*2/a]8 ^b ͗v4ףqEM]~Q\PoVZ+`QϥeаVNZ N2.e9ݪ*Q 3 nqr5 i$@)WW̗LhS*CY(A&>L-ϾӄԇeBf-x7!cd^8l:$X[¥9O'L@P[\GC4 \y̝v kPƽSMf/h @ g\>-_GW`83ǫ G ^H^?6=Z0vԾl ūOWT&s0_5 + kmyB| 8LIQ GJIՕK%R\d#6w4 ǏΗWדf;vN_;GVL_;CU杽ӧB(kB)0f tP[ _N6~uvh-Wojv[N[e,]ʠ4W<jX2tie$LF mxc~Mn(kRqXt3 Io79:޼ŽR^@#J~|ݩ\(jS^_-BU}lB[__eQ~rgt°JDWɃ;#ƫI*U 
Oiqӟ)?):n^Wc =BKw/Z.K;XGia?V0#*) _ |,S#ɲǞ ao^7CtHGg{|.qHTmK },ߥm4݆5E"T{@\){f ^1KW3d|>>^-CVw\] Nz6Mz)dK䝍=J(]8k3F?*9rѪ|fͬQ&zt| 7sǚ}9>WfU\}N6yuxKH-=)P'|i tCPN 2?jMȤE8&JeZDa2n 2_ sr&6;BgR;N6% ; aINiuFQą"`6Ζj|pEz̴|PYNj()iF{n$bFK)IqA(2(j2pPhޒeHXG"\JlB  ,Vi\ | \KLR5,w\0sJ,6ϏeEs A'JQRb`"P )< $ab):]sIZJ׽NjLԩ(E RrHd 3âҌ;d+ 8` (O X P~* ؔJ''AmuY7! b1B0g#d%61 Ip%'!mϵXiFa4E( tAQGLi$U-1:b55fIE-#emdù+Y:}NVW`>)FſiJU$$0 afOT/KU&ǩ 󰓟C1Io`JHskFt}} DT,)&+0Rt "85Iѹbp.mh8xy? U! HI,,9u븳ꨬw~((kR,>MV 9n+S j~k^}=zr^7Ą)D)cKuݰ-r#BRw;1_AO\'0-aS17EzSVGuZ=xs=p: r\皮siuz9ז껵9$*z޻Q(o!э=1˦nHs7yBև^Ov%i~{;Y]hxZdM6͕190D 9ϟ!1>EwUlUhcXW?.\GgoOm?ٛ0Qg? h`FX4yeZMsu Mm59g=^brN{L1qS[1w/ǷrjUhZӚ37 ? t 7n՟G%%fQXO"f8#./$Ϝo.Zٹ+e5aȄؾLInzqaɿ:3X2`ePҔEV%Q)0hZK=8_;IxB -aE6tu[,|D^ 5*-_iktf B1[[u2LwZ D佖)HFs+%"Yr roWj 0%·HDW1ƤɨybؘR_GLDC\r+zM XSDsCvYI( Pr g=1U@Aҫfp[&z(?d؟Yh#o7V 5,ݾC@ٚҁS8!-y\J *PFXpN`q,C e=U`d3ZBY%P2z-Xv܀P>RL@*K%bdF/cL![2fzɸ<,F[BrPfY[YWY8Q:Y/R*>vFa/ߤڼ߹fh5<0iL"pbD( BG΅1Dc  9E),C'ZK(Ib yI=tL؉i]"Un\Y/mgC٨c[sJi{nl;4IT1k cd. qF)i.h=g h*XȀ 1+YrR@dR*fن^VF}J5HlD$%"i%b+WQrxLrO*Ǚz?Uϗ>T" /ydHK0 Cbml5oT>m(cy }2ìL;$5VGC#wNO-d҆p9oL⬆CQTA%iㅑJmc=A!%"'^q`" lru&_x=)isA@$µ @)g1L e0Ds|wzr@ C`-P78*-vޕ6q#2/k3ƥ*Hdodue]*2)-e"%9IyTt7n4 )νZH J !PfPB.<#~R|-XT's{ǜf 5:%b a1Έ2?'kվV7ecEk2VaBNXc@'%X.%:52G2֋@ ہ} ~V*3ښxԸwPcyrB) _ q1-W9D RilږN-jROnX|`ʩ2nܳ+B+c&Q h'p*oN |@=;&gTcB+a9K|Z9R`:ܭ텯ڥ&l7Rdؠ]#-]jluu MT <ʶ8gKAqL3WrߨyB_oy])a{Vhqӡ$edOi*g,H'^-j0 c[H4 ̖hDh 0ܭaȒ)/  u$"J8q\s*DZA$.}b"\438 gRXfq^Ĝ1t^T퇌u!$QFGυV()5khe:%D P)1ZMGwr =m4 f`nߏ3tCf_ߗ:ߣav+ 3JC7<}Hsk~gr>ߙZߙJ/Xq1`8۹zZX]=\A(٭zF]=JԕhSwzjWA+o;uo&pc3.u'G!Ĭ(,vbnM::Z8B?8(oa y{3Hync$(^U!5E>u>CԺ¿F1_LW\Gɢ2ˮ4>5:tE2M$(H9w%X#KcC*~M3RF3RQtc|$x(vڈMٔ mt+ h޴M%wJ$F (̊hX@6)70C1}02YlZy?>42Xɠ TS(zрF}FTD+P5aw^ٯH2k! 
P\ ZݵTk[t-0F82jq( *S9QQWߌT[."MWHgyx~{ik`jS WTLE>ڴF( 5hYLuWAp b#1 G ku󦅓;xy!݄ (@ `3yS˾:*J4{}`X_:唄C%r %C$!ߑBI`#%!Qv0()S9ՠo%PS-uXuctc+4-^aYF|O\دC8vgͱ'v/EWvwg ڙ3âs.S|Ѭ\f??)GX](7tfr62h)]ak,* R[K%ImnZja N՘jWԟ?* \޸G)I(IV$k$A@24[e 빱1$ MM*$ <#:jt.,|2LH %\ C<\fл橒TM|KwA9ޒ7eR?f9e.?RZyyqi \*fϕs".xA50"@kTdhRfZC NX\tH2T= 7-THƱ`c)U1H*TM֌\3.F);хqZR.4.ܪ.P6s % oV-~mG5m_ۭkl')(TL$!I2C]6rle͸ pg#9YV\*/I{D|Dtݡeu\cn'Lku6SkOhxVaA)4pب.HV`TrFZYϹF $$iA-dDЊFPׄHdi/8 օ$)#k6rʨ.̊q_4bmFd5kDhF#.M3ԪH>H85I*QYz5D%U?RA?W7p ٜfN?~{W|3)#V~s .c#1U-Onè.4=C2S@vLF+9Psk/c۔{Zh*(leD8ha ǭG3N>ֵ!Q ?N>VJzF﫴g|t tɠ?+=Mk#w3ыq{w0jXeP}(f4z>]g/ՍGΦCb*1Irs7SN~=A!M;y-13%t961C6NO8y*oݿku,ꜘ9F:V@Mϻq,#anW_cMEV'vǿ;zR ^*'_ ou;TGo6~߾;LtvqG`}26&`&c~_i\D]Mc{㦹^|yH]0dɭ7?v›Vj'o8ݭ=pM.#H6d]_S#UT>UJQP/CB47b=2TP Ax?'6=/7/fb/Zje7%<;K2:<};k+ \xeثe`~Bҷm!c! 0. !LIh"ؤht4 /h Aܞ~.+Qs#{i)iYySEB2Z\9ўU[`(H)8l@ղՏ}9iTOUz35+l aVaV:ìMnb"SKj}&˃ZUp(U3{zrkcU!d;w.r“9k &+#9X|i%:1 -n޵őW gYv7N M2[nICC MR!eJ_ܠ_/IRz6Ui o+Zb DSEvZLC Jǩ I"@mk( D%TBCNr72GF=/WѠDS7OJjD*"cuEp*l 2~|0P8sP#%$P0$ɀma1>b'sUZBkmQeFj?#)<;B}YѤR E\lnTzgbS]zb〰KRd:]Ɗm>$\%*GW/%*Kɕ Z306 uM/+o]~Y#h`|zJ<~pUL12+㇛ =ݡMH=w6q4q 1X[7;z$)O \ucQg>z,;)cpS 56֟PأSE[f\--&(DX9qFsB:!x&]/trb\̒lt-}nÚmG?DqFGhHjOhsQMA+!A>PRl5\S B18qdSg~BCOXOi3l=gP[z 󁩦^k)BJ$?!%HUnм䀦@tGmi2HrHεX\w_e幾ղ^uf6A2w>}o]|bKjۑs-~\c^yTPlM;A^=j tw,Qe=o6~z {)tzϟ<WWzR;POM](3›`s- 2JAB$b@AOr[& 7쳷jL2 R P+<!z"9j{M Uw?:=}A^ ,]~XFm1B?,᫅O|1iLLXW; 2pu$3b3y*.AɳzdWnƖכK2xX={/WoGo>\#>+;涕*3'R|yh \uSF&XkD;?m aQg5b:Xh$\xÌR;M ^ RnZ7w3bs}.z qĀHO @'&viS1pPOP1Pa(Gwsb>\AIգ5 SjSdY溑#ɉ''gG앆* x-Z#S#@Tܧ1c5$pK5IJjT&Y2=t@|.AI-IuW)j2Q38!#bk2]]_Q늈o{yKV9=E0E&rvYBԄQaN0eӋQUNUGbfgCPǶ>93ahF+Bᖋ˭p.jCJ"g?~9iz]{{Ϩ Kr_1{YFߐs/>]ghJTUљ)b>V fqgD=cOHUI XAP? 
LRDcKҬ$c&g5NjM1-*g)Č6j]3"}Xj =a`9=[+x7ߖi[I]M|vA]W$,}m;w Z?~:{j-yefl‡o}yuܓ7.>յI|˞_u {맼CuGWOzy}tz8/{6[ۓ9?Ost÷x)Н|Cx^=ú}p{6^.2ӏWq;:ݱKз%?p9/vX웿oyCl͌:VFuj#N%yo8(zJz>&1 S@)~9(oX'X|VC/~_֒Cq|Oc|p]5X9a4K@ S,8 z_I#J2#-gi~k8?݅uWΖ ?Fu4.}+ ~4@%xcI#D/bRK!B*"do$Y朳paY[[ȃ$ R?LOrvRXC-2{(:GOў'px8%x-*wWcGL5QS͝Ī*/UDb&eog5>L?8$=O Apo d\,L tlZ=iW"8`gsNjIZ99fE Kb@u&--^&gdN~\m ׂC+ŵC'V)qa(gKu w,CD9Bo;URY* )-bT)9J5-kZl`">W=omSosU*IFWS.l%ZxRQ Pk*7糟4Iw{G|\ )kXwX16dx1D=6 9f3+!㤎=χA0'RklR` EܱYH5xK,I=$c#"Jۗ:\}ыSa,T˃o]ux !{0 Pc>/smFv V]ْw6$, 78!O,;A,a-t}v.K@tkHuBœlJ&Ioy|KᥦW p~n MQpMIr)$)STm1k2O0VvQXHNXH,8o5%/T!*NEEo0 |&I9sɑ2j͈S|B@! )L`|;CT-Di?| ޸˻ў9@ _|P3!}@̌W_ HZкPqP´:j0OqeBijPMLh9az}GMLc-z! 79">"q|CB ̮:G66Jͭd2=5ʡǦřǬ,j3 Io~0/ \L/OR@]Vd@lթlY=tPP] 'A㒡:viS|}a=q9H,!斍.Vײh]:w %9URARAVPJg{ȑW6ߏ! vfn0 E:˒GxݯzYZmʒGjvUWb0Nh*n(:-whT {K{LsDϲʻpWGhmqToDgg[1N5=ktRAx av~SyLYU G"GV*kygvU/'s!}_\UQ@vLFsj9ȍFnFcTGgFrU~BwXDt:mjrsrNjGp$sP3 ?ǁ3ǛΥHݢ~mfM۷ RE0էrU+rGSGbobu 8O8pQ8j"'S)spoiۡqufCs#O=RAJ>røG[l7̗Z8>|>fW Ū9y+̾d_LJwG0owҚ9Ujb3f|Po@%P>=$pTZhM<,">뎗x-cjNo~;uYvg@OI$Y:zZ怤P-R7@cs1h>jSւ"8Q`7B,a *1r; ?]>v~ r1 \ Y6NrwW?y7_ &ӾO4=1k(U6>? g÷9kV,@ŶEZAZv&G/Ѵi:u`W52q>TZ*HSBD_Pzss0! 'b@sC|N&INˍBGں92Υ ܺ$I [UI$% 5Ƅļi O #gG5tpd󚲜*ܧ'/~?_' 2h! &%⪇ir \\Cbӊ &:U%_i3H 7w7w{xoYjޅr8z"Oyٟ' 9(% i!S!A[.JYŜ`1%b m XLSaZ K;cjAó/زeK1X=]$LeN,/ 2%A  ՚䅳sͥf.Wքٓ .L v*eU:T!DGO Ѯ"y- 1`Рi-q wxBrQK{T̻A;pͨz5[+ߞ 9ٟ#ͨdɅ =罥BD:Y*W*f>APTrsI> O gp ơ!@NPb̧|f*23 ZqQmY>L E!d8~4X! 
rBL(W-3 q Üo o*1>ydм,\C>6G<ϱ5Bz5;_#O[y,ci0V)Zyq9B6z qʔ -3< l9`{8j+JftBs2vVQhS:95qy(Zwt+o(6o_}K9_5$O$$i&>w` L4~ oL*&W>ѧSK?ŋi[sĝwI Ÿq~ủQKn2=!qrQSG(jZqy޼4 /!Ux!#C]* 6])kn;YQ?&@wrcfp5z9\ws)7WDȘ_M;HaX1rk&@+ov*Fy#i G+LTR\9ɒ6w食R>i%$8AwDXl \@+jĢBJԪ$T*n߿Mo$Q!'B7L[$D|b.~YW=y+ TxN*-WZ[@S & :ipC fA"ScFJZ; I5A־j[FX5V;݋=^(%T 4.dC%DstHu(|4JHadmv'L|3>ʧ۠%d{yelpB^G<-y'd4D3TgrϦ4SH$4aN?s`BsM%=gp\b%v֦űEG{BGQ-v^ʫ}pkeBp̄s\gdXBF49@ Vj)L+Ew^A3}hhH{po YBKV |7{o U;,1̲&ɗG*#)iS8W%j(r^*R*JLCDhZCUͽX/bnGz 9N]TBKkb2Z)CpY L(q0$hZ-~^AfWsx3X_'?2tl 3߯J}v[-Գ\h*~?% ] #l|t(uõ:ZWp}zFGGhixMM1P F@VhYb"u,yPP<XZ4B~ڭGcD<Ƿ+7>YW6}15u?c|42(s9k9hCN%a D<!ؐl5N4B)U {P l`!KeIFa0;(\EI3g{iq+/"p'xpkCɺ[l7Sݕ'B*%v%Bj\gJ)r>t!TL@hO3ϠZXEn?J 6 PTڲB|'^*ijOoL^T*$b\DH\2܆H>7"?{WFl;=G,܇..`wq(~Bkd.oQj9cYD-㑨V5x΋<VptYlygF7̍?8#=e/ߟ8nÇ#|MOf6]̏,S'&fx~dn"іgN pu=7QA?"9 S&1i>oR> c֝`B_#_ʾ>{z=n=3aL AE6d A#*JIP LĠ#'6!k20\^7x_q1MH5r,\4:K"iA9hb.X8@Nhabt%8|yqsu_.?D75ٚ]e M]7o NAT+^zL׼ 0UMؓ HJLԁպQ.x%")-ҊmbdO~(U6 #{WVQ WH23oMR Ҥ22t96l ه r*  Ԛ8Ah:^ɡtyL Ka/,y"ۺ'\/P]Ş)*q]H)D)ʕWvQ$k-*vvFXl $"otڢT = $&{LJrHVKV,j.'Tf`U1UZuc;k&Ύv?^pPԚ$4fd ?m%{%CR-$clZDlInPZCyz?/ea/XR9)Tzc&WΗR,c(ribSV3c4I?#9 ^V>3bd"%d+$9djJC.䬏% A^3hO7>pSkyJRcH(,$ 2>mt6d#b`|X{ wDX\~_3v0 P;a]$lR*HA1XUQVjB'A<ڙ- >h!Zol-nbso-f@J\1e $j:!aL6ydͮA&R9w8axɻxyȿWa\?nw:w+)׃F).?NboAz'+!Ro Ryv ͒oً.lR6_fl{fyS/, %9 ^{BJ6]&vZ~Z]Q}Cɯe/&mɥZ;YQg7׳/|/UY5;v{pIƊqW[Q+/}Ѓy[|C@^A?Y d=&/ElA*[[:#IʺF4ޏer^9P—y=3}3X4IMC^d>XFG&c0I 9*ӓm WYvL,{>]ګ}[AY[z>q/ւ)]ow |HTyǭC8)Aco9XTTR/-!ΒC= "(ʘ9Tlq\F>M}+eW_-=0 %M!VhIg3-A`XC1Q},Ō('8'Xʒ0% 2M1R Q%~ AۚYm @׼]ِG=؈7yzvf9T@NgCRj]EHA&^vȆ"[#_`D"q<׋4g< $(N æ3BvdhN)(F&bT($Y|6GTN=/F8{aqPb2\:K! 
b 18J\f@zK 0Uh3#F!RBNH$5!rǒH Fq hNyiw8ގw5<%^1 A'^MfВlDL#q/vsKz>k"66uQ Amo]$l*Tb $$]p,Oy3[0˫Cfu&ͼ../C,H@K7&,DM;Y'$&/lbr' xgqQ©xN=,%H3x=%[j|)}0uaZsXT EiG32UZ;4 {?)؀Oѻҳ`t)唑>ݯN nm[DY"(e@,:dd&"Ă6ywS )I n>Ϳɷ$^\E/y6Y ԿOy<4Og}ive!$==BŽM7XkmY\/n[%Ԇet EiGTk߳բ*ۼ |̜7PB]:QwQ#@_3Tpg(H6VAk)iTFP霓L)d ٱRdmoLmTZSNS/ m~D% #b,*?Y'ti]AL;+(xKOx_P?B1u+  =ecCy[|C@^O?_κqYvK[a ֖H@Q.=7SP)68= !Fc 9(뙾SPٌ$Ц!K/J,#H#1Hɶ+,[.c3tzcfqq^Pp m{TH繊.؜\'Q&v][ʧ)y,I+a|2 K&R&Zv3!(Zgm5g?^__^.ʦlN־_p{6ˠb,_(D'V'VUŵTҪǡUcZLRa7aZ{ȑ_ewi"d';bl`,ZNYnIldďV],*OGg."A"J+D8B(Gk]m thKp Litxa7kxLM1*DAyI4>ɐyOKQ7YbT21C7y 'xA༷THY"V'PЇJ2wDa78k\u%`?NS ۸{6|[[N>w~(Z;T3ivŭdz\\\3 O,rKȥ5etՅ~jz33G1H%]^fR^$_]5W,.^yʾs盋KXycU79XyDmAGWxj noS/z8UO.]wbu5 m)Bu#3[JR(QIӁOWY" JCNUIG" "|cg ymzCO@㻠JQyLkS0-NPD2V`:F!*.B}x $c;XQTbae4™ăwQ x)[V#;HHܨ`[ߊH8m8+,"Q/pq<$uBSY3♷0cBlRفvbpI~U%YQyrT#݁;?οi^1Cvgz0`aH螸ʤÜq9ΩɳdzU`>ٷRY[4n('c#c_`"AG lh9aߍFGhcVsFjU~BwXDt:m⢇5[7q֠ؿ#48"tЗ:Jwp;6"k;5_m./fU C8z}*&'hjڛ71p8#o`T~6Tꏝg_i.!FޜyWxuuy:>zKJTsOiӷL,^M;GE-5L$-/j-nF.~YebP'܋y{דݴ=^.{8Y!Y3"FܰD`F3Hl6> UyF,BGsn9 ݋STǿ9:~ͻW|ǻWo?Sf_wp e &f=Z#Տ;MuMjۛ4͍XiK>WK R;GvkgSn=8>B.~=!W~ͬq7lЁGPlMfw [RTZ7TjbǼ3ڑ^J¡ۼ-Du0G!_ _2V)aS4ɪ-z n/BpM3h@ +eЖ0P?tӟ!7nb  K~"S2ZM4F$:tO+vCd}\aUAQEk9S8k'=1T&SL'^H5,C={[mvrֶ@/zu/`Y+6s+ᬪT1bK))}"M?|)?W82Nd^v->3ɶEZ[BZ.#-#/$zMK8TYǹ쪆T2·JKeq+]绺2`}) d\TRRUZ:_'9I,_E:ֵEBWs,*~^$tc EDȮE2LV'<.Ni˒}c{.91 L8A=Pkv&ŨOy8 ,8+Tr\ZzÍhffpP$L :hь\7BL(WLC4 B>8m]H/'LP%~rΔKE;g~o& L^l\^ b׿='WærR൷i~sTԋ3ag16J"X ;}`m<,Auޙ(Re]^DV>Wy%02궲#ͬ 7s /݊'Qdnyǜ O2/X"nSe]89ܖ9+CGs/ޣ៟|pkeBp,Qf-kw{m:Vɲ[N)۽#|l@_݌1#Y^ΛJL:.+ǝY bg*JWxwS_|VbL!Jno.G@fD72aa*bE(ΎpLu^ɲn58'@0;9O \eQR:{/"Om S"8S>ɛxB哮̟ B%|A£5N SjSd ,sgkGE V)Y3S.gPy2F(kHy*$Z4հI9)cdTBmHUN*}pYQpW bfp0C&΁Mnts{7RzSW7VBz˛_r'Zf}T_xLEp Loz*G@b!Bc塍 SEkU4T}  aSAdsv.0ҊF߂˳ހ^_ߵm=QZz F+[|Sï=,1*?Ӫ6.FLӪRœVӪR>񁰯z__}@)QtMGgPNp14[]skti+OvKI gA,lCҬ$#pߓ! 
c\+՚bs`bzBhC5#YRSFN]&@=6 eؖW_\Ѳ͏z~k[p?~K[X/Svosݟ_-{^‡O}}s7|=}k Su\W~a*~]{:zѻy3Ø݁ 4IGN+m=|]?Y9`捑ǪϷ;yeۇ5 ɸtīP˓;q;?Cmƪ11j̡5ޣ2͡ė %f}{jÚ,߷_ܷ1i) Xչ\fOzpvZ~=ukz#tvYhrj,mNA=s`2GZ+7iDIS,7Pnm,.w|X&n/mWۡ%{튟/'P$B,ɔJ-Jk0s~c)XZ3".x^Q6&[Z5c€ŚD!ԳJ*L.M[ @P^˧"t[u%/x:xRșZB5[WsX[c"ĕZ2llHg \}|ly@1j*)f!V/.Iզo8(a3y:]r;3^9^՛ЊD[ML,3PUQ'~=3z\f3t2П1Z M!y?aubtFZ~v'l N63A *OԖS}GP`gbT +-wŚ|4@*bgtH ȓJBս%A!Z$Q8A"W`4ˮb@){:L$cm(墩[b]Ћ2rX7~TͲӶ+v=OI.OŇp13dVOKvD)y]V wSMMm7EWx 8[0=SA,xdb\ CF>A\3`%X1:=_fgdt;u*rzl\ Ԝ(".pR lg9Ζv0u w$gCL9BM6奢,R[Ĵצ(մ줥J`l#M9_8omS7ֹ*N $\ #tũgl%Z]0U.g?iR@װ6*cT 6dx1D {JCm& C㬶đ[{~֞N3hj5ѕ `R` E]c9- I=tc#"*:\}ыs1a*r`[: A7vαddՅ !e3@&foggd.9X0 9:pu{ ~]:dZy (ҭ (K"N sK%x !S]YkCzއK܇! qr\?:)TLҝ|% IU&.@)DŽB#o\|E6";^dbz1o[)]A)F8ȖbsJ)z\\CQWS e꼏Ta/}2yK77c_#edNvn֕W  %m"xT,V.v:GMO:.;\gfZkwR]$^zQ4i^]!N]f'?}ǖOǂC"]+U゠gF[gЃϒOII%z{$y|tzwil]vva^ȅIY6 ^ͦ^kb:M=қyS=nA}NqY&W\֌>)~+܆t2Ε B)&b/bjxy|KlxKq%iU/-wZSlʈֳbFB&V=UBO9-; 3|>7pGF^BņP Xhf-+U aH0H̑sۓ30Mj K ;/-5Mv>z`GQQ 9 c[umdgO?LO8ɱ]w%rowɻ/5]/-@iYarV-Z"CQŠ 7%TJQ{Оc!8&`\]C{5An"Շ@o=cBOU b섪@krd'kc 5waqdc!XxN3=B-%/.-?(w~D//woߨ02QTcR(-a19m}-%ӵRujOa@9Qa6`Ij}VH[֫9q jW(E=ilCqOFaGjV\\+Ssa0.g\ܥp=R@]#oٲZ/crKMGIJÌsaq,x@؞kAG~Onm7goL6Z\,tAxPs_tۧ KFL!!M@>L!.U[yǑ׳,ty;Iᑲ/tbQi5ĒSSH|A3Ք]s"&Q&TlP,vUYY%aQY@jtJm4E%gqZ{Y8͚mϓbw 66Yssȏ߹Vclʚ|. FյuTzRPG(4\MLh0NzeCq76S-z%$E̺dM0q<9gTSû zÖhF93'ޚs=bE9l?GDlvM&4{lN) ]ޕ5#Rᗙ"q%EtO{vyiGB""e[}S%,YEXH/:/F5J|a`J0!S;Ge0*l,'2[޴d$% 0Jߟs'{<6 U?prXχ^XZ{)1AT`eL @RJpMYE@Yiut{]ۿҴ ѲrtT`B%eWCSH#A'=PFevi-ʖXYqQOẂq\1@x0w?[J ʬ[9>PA2c9 &2cvr2sr%[/6a   ?'Lx'xYF4O4ziF ,6&w~inO/]f7s;:t]rU׋q-|p8x[Ւp+[Rp{KZf$ F?Q,pп hf7ssꬓZ]WR3) ߇籼b0WEmtyWvxUwTO>yYo0]1_H{/ʿ9|!~ 8cE'&mpf1o]iZݠYҪ5^OvbmvyE7R1ۋ% )?U^|}3H?FE5;?eijNlњ\WlIZިW4T:Up5*FXlČzd5>h62ԼO]ى{iﴏ%[$ V9i^ZlqD_݁f Bp*ZT2Ĭ?tԟoD :B;&- )at?Pd!l8T!Olo}kL?׍.zGvҚy~/`Cu8QKGReXJ1Z_ebA[Kw6:@yBHzyiE|eR!NFEw=}awr,H5bګArY'}1s,vNqfJ%ZJH%s\c&E2ŒS!eES9Mbr0}MvYvY!"AGIr<&r","T ԑ-Y4l=l^YN,CdT#R12. 
P3%3r8+xXLQ.w+_e?XS,)&ySU=5B4u{MÍPxvOBh+>Er*G}ܲR[v~]ݝ/yv}VW/=>:Ϟngwt]VټWMw-ǻ66/黟8zW}s}WQx;Y [\~nZxBHURv:!+#詿SOS Ԇi_TZٽ{hE=zV'>NMfZt|zjۇ< ?9f[/f5oiwt̯qԏ&C&K.;os>aRCwxM}=ϓ>.q^፯o TmT }3jlIbt 缔NMW G;~iS?/.i&wK"^[/}q%-v&ul^#4aK's=wGX֍OIW 5Gh"-٬lNq3n?۳ 5YaNygœuʚK_h1FYNOO<$7꒹")hD$!pb%xCFTb' `ށ> Gt04Y2)6C jA`O-VOfuyŘ˩8b&H$ -(mg 5^XN~uv3ݸ-\Ծ黓H_/!/JbX VQ ( O2d,mII;4R!M,ӊodQxWY!BRAN5- g:%]Mo<|_,v<m_#n[ltV/zϗwT'ێZ⯅U-A"("tJhk *T(lΪlDҎrvvBY_} !Z K;t0# $6I,j3;#.( Ɋ75.UB3 ƃ 3xBւP7fig]0ՖE^1d1 mJ-$clZDj &n=p_ˆ/XR9)&TɡQD~**rRcNEY&6SHQx5O5aG$~qU\̌1DJ RʐX8% A8L'DЍ#4<f{T!Jj|K/c98jGH1_bZQ\$W fuIf :#tJ耢Ĭq,,ms*=nJt_!a<|e/$cI6?-30LSFޯVYip2ǶSZqB6xϔ2 U .4.@5zh*!Z%vRɧz8qu]j[E{]Wra}},fY7҂IӴO¶8:M IymAxeSLGAVO&\|Ќ{\8W#P¹1 ',3 Y]ՁhWySN;B~{lQ^k[ F"Ҙ,U)!Ķ5 7^8o&ΞVR9t]K>r)mDc $ dF/9 @VR*Wϥdϲڸ [@m񚿷~^2fH@io 6(ƆL=Nkfii3tmP!\vy{FtyV)f:h^q;{O`HW_ZayhWeWf9(sNA5 .ΠG"{"+ Y9 "냮ydAuRb LqDUc%+œ(…٣eqZ$x-B+Q '96luzwLg,T10Q.J 7y]?9_C.R4x & "K%Ȝ/]j )2Bym˾LzARR)8XlZRV', EjjL=[=P>imӓ ^t͓ x@KQ'e+dMPXA:  ܵŔoP%]_mj% L^2ţ=H lӶZ;pO^nUUW.t4{ƺWtC6& "@FIOJEV2T?{Ǎ[`[!X:$,X!4h$a+<G3(zt4٬Wb}XLmod/4d%y9c&X+itB*7EFؑ %Sk h"NS:HhYQ'$Re9dE2ʪlP/NfZ?Y QŐ>Fr7; =Ǜz%3R~OwTH/ *E?|C7xVjx Gb;^&Q Gľ4wR윰k2 nznAѨ*oƀl~9caٹpAv`  @=~ċIHE^ޯB{{Y]C1Io?8iPv1_¨l4=b?P/3oߜ}xs%sa×a>:<0[/Jh0 (;|0{[7uꮛp89=H'h8wo}]~o{υo߽f?QaIag^NZo+,9QR*v9h#%3\ɾADV׳~&ɡӯ{0}QY5;;ϜG `|>Y.n7ݨy ݝycsQONGK u:ViVk6a8Ҍp.'~6~gFFYY' t>@J"~)Պ],.<tSޛA49"!H8OLEpyd<,#Z%HH*eg-u!iI]\"."h!zKA?D%Ujhgt]ɨ`pɽKɻpy+NA͚~;::\lmrޖ &k$'n1]q k#YwN|K}pVJUBx2a,:"ilYiSZY=BQd5q>sŔX-o.^,ĠtT pAڢ mk./}х} \MZ,ݬ)7?\ @݋*,Z:96t!tdf=~/gvX OuT90 +mruI%9}}`pkD BXb9ȬS2=lsY4KB<H,{(HiE40zi}T',&A)82ԫ&xUYI'tKp7 GeR4F?V!H+"\YS*-u7Z^c6߆ MSHSXP%x @VĤӻ79̓EJk%WUf$dA[w9 Et–d.!4|zm^WH?}J)W֫mu,IexB>h@z%0 1"9,Y^tlt'[GۀRʙ]ɳA-p%H׶Xz.<}:X!s@;njE)Bg8 rͮGtNF*P ǯ< 4K)@Hk[ݙlhjU T<+;7Η8J3˜b LR<#P,$2 `x@NX "T6Uw޾%wd4K 6;\Cגlȿq^164b<:™gӧMM ;R9)h*8::r:ʝJwQGe e]st-8ֳ}S瑞ƛTrN51  t΄<+wVfhol=⾂=73OIOU2S1m33|zzF_VB>E\l BiEJZAX8夿9@Z z.d_'%wϯH=9ɡV;)Xcn׍1_7|ݘucn׍1_7|B˥s1_7zqo,1>7mc|nύ1>7H1>7sc|nύ1>Gjύs#|nύ>7痚#\#WcPmjcPm A16՗$a*)EgQIs@ .BRVIVR.Tw 
aH`?J8 j >Hi {s!jsP94VS[Ou#T*Vcߔ, yx.mN%\v֜ c[t,r1U$*!IX:]7YeN%wGh鿰2koT& dk+a.+bw]2>>ۆ Qa=qoo#î/7j@ x؍#T C$ ڲNq rlHܣB'"^Yca* n%Zźs-ܬ WXw,Vm[(0u:ǫuϯՇϿjc.FYJ^ٌPߔ(8K'2%CZd8^lIp~w2זlȅx.f 0AKiq?+x2Y RYu!#y2/1p.ٶOnOd8W*ZD(y -D%n 2)!\c jRۈF7:!*.#J!x['< ^GcIʁj<doQOf%ˉ;5=fLzI)e#pe'DeD+b.h$R mu6UUTTzQ*pC}B5//O")̈SB(̥ʊZMJQwJOx8 b[T'+nJjlYݺd_&oy<;ɳ>&^)y{Tb5lh/)x|sT(* f1y0)iN[,:^n:"H `1-j GrtIA\A򅜶}}Y`sB&#U2V~XTFJl }ksʳdd_q:;/]8:}>:}[A"ߒ\Jʆ`)&y Ax ( Ec[W7[LN0g/`2F!E1&Q`ɷc炝79!jgk~GCbnծ6:Vڼ՞߯Y`D^ ri**=x[F '.8gC2PfmiC&dyф,@@! rI_+jo{~S XjcU(*[D,bwxH9$ <(-п2ڃ1&DN<} L"An.Kx3}gcRؙ19 ]JI ee\%AdeXM-H:]wSڦXgQ]Tlv;4R:_"U hwP&+Z2M&,+ ":jcU{+C=\ ^| 9AT=[ wk]I{|@\~PҺُTM<^Liݣ'ہS)CTJA@Jŭ%z\v+ w[ȷ"ߑ{"$1pTBjR*CFM9D҅P7s+KEѸN>qdNMCTP+s&(re`F8س2۔Ը"k&QV(},B٣ kL~Qj @_0ef Uǘ5r=嗜RKWb-,ߡp\ΓuW Dn4j yb@hKR,TD Bxb֫ sɬS28:ΚYn6!(HiE40zi}T',&A).Dzʸ8HRrɜŖr"ۗ7 GeR4FkV!'-2\\se-u7i̊l0K HӔ- 20 gI%woȗwXp9(v)N>ۀ߃9`P n(ȥ|mg3ӹ{@]9M}>8QGڲF· TLw]kmH ?Ϫ/ 81|EKҭDHIU )DiǰLsV%3T"Ҥ&.2ljtXں́P8I! PPCUf $CB xپ fkC [?YmlLq}m~s O㗳\mex#Wb-W5\tBHhэrpj|pIL8J(/63aSr*hG/ DyTZBuԨqQKm*Cb#GQ2/JTEn Kb2<"eqBPRh3]AA1濩&fpD6q:3ڛGϺЉfIzK1aa;4>{nL3SϷf RJ 6Lpz#iw*p26[Ey{tiڠaE &fc\ {#ʹ 7կ+1i;sRcUz4'bPն5G>AٜM5^ydf~MüSevFX1hw*пyIQΦ U֖(#:wb"ΙB,ǸV9kd~8S֪7j[pinfpq]"NcT|N h/i-Ӿ_g C6Jpހs~Ei&Xp@t"#_R\t~{]H%Ap3:-uLaP*uZ";cT~aq:73׏~mdFbn==|r@` z;~oWCm-0(}*hVus2BY5 `7B<cp!g5XX2osa0 QJUsyeJ;p*10atAJMQ:!ţJGO)i^ɇͪz:9*[1UlX>SdDto:V0藘27Ektb c=hqoX1N_ziv'5r0$vH;dI:~e1؊{h_'=r}O׵޶<4Tt|E[zߕ<å^2 ۭ:?8R1}8c7\^cwrgNl}aI^_\KFW%|_1'n ׆ko8GF{'ν臻Bَws?mR2iL$ߜh}{mM-v;ݱ*ű$X9Ί;ޱ}F>lnVZuj6m6{8\ȺoczyIԷ<7p..Fmv_xY7LlG]SF󠽿a{*K!}0G9 փ纬ðP~\`Rr`[@6f+1!ldQ0Re|OA 7Ti"7`xv_mD|l5_%dӂ*XYH2GdѠd| *lvf L` 1ze|ԴvO~+CNjW7&ٖ75ىtv~iu+½Ujֳ7 ^IÞmZ*~UkO4)`,:k*Jx-eLwŮ@z[>`{;X )#e&HY܅PhHB6NR1<U j#X@((Q%$5P7kkL'cN#8NOFr%JD B ݣޚVMx|0Qkc6Pl`9o2V"(}9Asi Sɛļ-qB 7R[-=̀;R@JFUr(h*.!#ksȉ[0^hDM8F ,S2Vk$C'+Y5qԳV~{j6]3PK 3$mۤqr/ p ZG"#%,P&P'MHeUT \ON GY!KӤ)ieR(zǀ;,Fg29r %:uXwVgȱo!j_LYb+vt9y!cn 緸ȎoܸfȽp oE{͕58F[Bm6 Sݧ96}$\0z9fit+Ҩn칲7xnˮ;Wn\Jx~奖C V_o~v3j?ilA: lюzͦМ?鶛VsUm#[?v?4 suhCm+a\ңJ{܇ )Yn~ä̽H~|s%>?k;>s[sGr 
boPuƮ_Oֆ$B AVݧr5#pLdu7p=צ>7[s_ywf3࿳_9۴7{tCvP lG{ja"Z4ZkPk!=T>0ぴ "\ V!Wt-j5,WCs'\!XQ"\Ր˫)Zg Wǃ+QFV!Xe5r ^6qWjɮWv=r٭WCq5T:=q4<<|\ ~=kWCnXZ̮jW##C q5GV_:J '\!ck*!i\ \#Cm\At:2xΑ]Ru5\kVS _:^Oꏃ+z+ʧq5KpCʁ=ԉ Tz^h脫g}Hp{q=yl&Ed#$?Qw_mu%ջ6umewiM}CӋZ G^O5<0_1ś! Ǘ6w! EzzFþ]ü 6%%1曳.޿o78/޻Yi4`Z l|$ƧBqZJt#^Ayq gLV"0sHnSkbL+_6'}鑝|A~ʣ_0јF4/g(ko4Ѧ7YsĠEl- nG4Oܕ+{s#r&Etfn+cZ7jשg;0 ׃iԒ[<ɜ0}fcpjy5rYׂPkvT> Wǃ+1J_jqWCnXZkﮆJ{rWLj+5)WCp0Vjpbd*ă Q\ڕ\ kYqᅴͻYu:lQrA}U _ggt<\fϋ}ik( q }wpO!<Y0t6UyQ^ d|&Yu9d%P8K&j3űɍ2;*N \Ă?Ǟ\OϷઽ0C7j$W׭ICO6hzs񮶟>ʕlOT]Bś>R22N"3BUcPg ׳j攅BcSc.TMͪUrTa%Ts0٤H%~^hή*uS&bmXGxA;M6ugRiI!vR-!So-(V1Jk`Didь}'R)rMEB`bwRe"ڮ I9IiOgm$cĜDh1=XzLBuɌa:B4cv-.o8{ptBRb/9%|n D4!%wZh2U昭1+y=qϰXF4ي06Yʹc@&?"rih*T{}f` Pэe/H`DLhM?_7R![^y:fDy%cNBk㙙@/Ss.>h̪B޴ٚJjݫ8gRIIT1ڦs ;H9YC%x}zs_-ŰOHѻ);RcIE-)$qu$~bs€bҔڈRO!x,R!HڬFOaj4U&b>#k^,\ 1է@}(t)Zzk,d0sSb3ȸT'Xt^{ ub =5B }m.5SGގ L&yˌkyJlPqQ@}[ ޸k.*68:jkaxv/ux;mq2Nk8QI[U<ĪJ,2h 3SXBs)[˺W Dž̆XW/WgiB`TR {bz$ƬC6l)C.D@ %ٻV$V*N!'[ʔX|.ߧ؎~5hZpCZD* )P&6  ʄW'ixXPܧX < PAIJ9iZT2\3r5&$I2~=X`"+aLUaPwVrEP1u2jQ>2 ("25{4v!ץ;-!n3֜`A'b΂#n ݄cBl00H b4đPI!0TTD3-VQr#3q_`J0 PP;G\`#)#MEUPמ,H(`;:?@!N% u[]T RfD5ڽWSRE>R]M ƘyثM`!ѿ%9N"T&-"pPBݧ[,E@I 2vomy V&КeM623 r=/z1#.M5quHN 7*D͛2 !0!bEM焩wlx]gMpsۮ7cYs_KZp :Z01r/ۺ $>MW$ >:%:ppiƋFtd)paJrZe1%OpHv9OUK*%tF\8'(Z V<@E&rZd^[0P>x.N1/zOCH}l,knG-8>'U߫jp4%wjX lZ.kE$Sbm1~v?>߯./Y߿wy4ԇ\(T6c%V,XBX;K)Gnk"!%:]%/sI`홻v51Eg"f7cXd+$`fj2ŌJL T (o*"qj+jV-}F,*jBM2(3!βnBPmˉ+KQ5{-n;nߕ:tg&QЀʬa+JTjN:UQExba- mp$h1/^7L#KH03 imUGjhM.Ov+;DM ~.7ô7WڎnT&IkA0u]-'Gã'åE:<$L)dJ<'jmT8֚B$KBq)'ޮ _&hhi6#dc0pPaRDluX֚b樇 |yB1C[usEyY\z@+()  3z<`N;XoYo:h슂 nC+Eq$Qv57Y wP._j`Q IyvE5FHDQGb$abtUݧXk~z,amL 18ƀdw37vjkփ+LNm3iXX*6 9O5ig{^ vKixӪ @HWUl6k`(|8Pg@ X8.dO3<%tAQ%#p"5(=9Qa,i(6tqHn-]3 pQi8-k\sm_]좻ȇɗ1"KIv9Ųcٖ,[dt1FLQ[NSUVr(:eb12 6+!3"f).$( ^cLuY~kuEB Xp3{:jHD7z|<pxga%STCUAuY^}k߇Jp*P*7xyW7?ZCy]ob~op_J<?E:YK5e4̾*/~荗Y^{;JgŴzʅ=a{OôΦxp~2ֹ~f~Ilkm+UMYG^xW $~'5sP Wݩ ɡIwJΐ@(W@֪@x$T9""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ":^HtRs*ֿ UzI$1@Z߱G$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D 
D$@?/ K$V"@֝!@'@5D# doD D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D t$DwFt;N^ t$ Z D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@?kLcb\H(w*%Oy4͞x跦K;)Lg%r%L WtFCv!h]\ٙ7ZnWZ#+|מ+8]\)bWcݮP#,2]`B^uŮ@- JE1ڕÿpw(vg2]ZnWRy#+\i]Zjo4BtŮPodQIvؕzdի8{VdGW۩p;ˮvȮvz΅Cv]m'vjo]ʶEWdW/dWf}b- cZ&C.r(~׵&%n^t&*zL3O\dF܂uȮP+{ZvBmK zRܿlS {E'_C]]i!+cW(wfN>3v*":B2BN ݱ+dW jnWK#++|w&>ծ@YjG3kW X#+'<3]%KRuƮP]ZcnW_Ѯ<&UtȮ@7C3鱨V؟Ȯ#^0iyܮ,١vCep;JˮvɮvzZ M>,k7 _6x1ƭ,GQ>5J Ko< z7!4冗e6iXרՓJ/<%XJσi4@8鬤j(J[QM.}rAeY&0jޚ+RuRWgi~G#POqT%Jum׽ذs[.OwQo~.]eⶮceC(tbONZCչCun/JjbjQD$˕ZU9t2'Tq7Y2f/ғZ,bbw8VMVE;MU]|uЂqgs%-i%ҍY|5`+|u% %v7#'D =B;0YU܈DŖաFά,\ѝT+MWPp+ Bqm [B/{+Pk]JɮЮ$̪]`wklW zv+#+yy fw'BRuŮ>av*׶!:H٫]`dg BY1QtuveKI| ؽ+\W >{B UJAvuveK/ˮ+vjM'R1ڕN.-`]\ۙ+T[oWrB$ѮZti2}w֮@NUv* ]@M?͑An~%/UU*T(d*ɑԜ5_J6(4]]m`Wט,`mcqh: Oز)r훯^F\X|g4W4sWy_z[;olu##*$(iJcU)oh"I_bv.A!3F$烅Y^XؤUPw6A7oWp/Nq'P~~z|gq3'U0tqsҶmQ?Zm? xA/|7Nr2)[,0jϽ'1pq2 fNV㵯W*K{o~BgL {?VSpp|9)^h6_ :'FU3_?ԃw_-6Vf0@ ffNN_d\%;ns`p.l_,{0jr!0is--\{Aq9d>PiUӬ* (UW>*bŅ*J.6Ǵ~sh~29sYtuG,޻ȥGuݽ؅7ӂ.7zKvʡ*'3B4\1b5tmu!5HbP:eCq3_*Jl2'jgב\D.y5dor(oRF NlehCa8?~LC0;\)tZҤ ȜMy~\lwcrsTBJ0 *i5_<7?4;)*G.,F ʜw_Ǔ_zj*>=D,sfS,ab%QC."d^ =` M߳y·IQy,4i|(Ĭdb&R.UC1\ a 7or6e#G ՒD(,R Zs`*[#J2dII^ ^;m(Œ2xu/cDWs^K\ lA&gV^=WKpSX`<0u&a-T<JkdVQùN^ziw:st?'?`X6rIt28\mx gVI9m .aLA) aKSI9JپnL+YEȍ(vW>}X KMӆ1ß3Z}3|d<}ӎq4/'-OjFbngyE,WjU%D 4KTFbj#߾VԪ_I#7$1Z2Y;guor^BY_5ƭEZixzNSxg/ZH@{-=RsGw<x]Ys9+yݘq=zc&bٗq8@BҘ"$%[}CC(rX2,T/DfsNF5h!xL&BDȍ7&7S ?uXFX-K`/XlKR!Fm2X_jp3OZj#QtI! p 1ʁD&ctQH:+̲'MK%LJ 8%FޖWf3c 67foݩ?dZ15p9;Y;T7}shw>_ XIJi h+ÉwS^X,:h^Gp3H. BhfBʑAm7`Ǔx7`Zz W:SN,_ȳTFCqOF_/VsdQ ZFE A1!8*{s=s稤.-A଴{YfcT<@Ȥ]I,T]i,sq2!xrYJf.s␉J:DCsc[Guv!*tք .wo$x|jWܺGÇ^OgxawNh,y1z1 b֒7ŵ*tb{tBw]zH p͛?Ժͼi8GldgahRˆZ6 mݿm{ƻ;_d;أ絖8[^ݾss 8YD"cκG B e*zrx̥ xuh 2Ht A6M &xIG!.!g>n^+jJL><>)m(+TZ^O,QY\|)c!+nYźO46 ρ0K:3NSO>6BODOۛT՗Ң/r3Ve 3~|iA<9+XR@/[p5 m4͍߾M]ȒO}uwR"'8k'/6 bA<Ԗ᷅4Ǔ9amkھK<74Rx6z{K~]3g//? 
z<8"mw$m1܊GLi"տ yry[2ʳ{s'z6 +zL|m`1$?/L;Oı}Z#I0-%j>P]qD/iwt<8[F#颻gy _ɸzeomʅ >wx rY8-F7X1ρ~L[X'$; <l:X|Sy.`,}h1_6?>1i/$;W `oII%2u c\uS,GӥlMQZ4nxIo%sCĨXfmf(Rv![%=ѕ`_kd>Rl STi&;nW+ːNEm0euN X `HQGg.[`d$*W40擦]Jjq`9]SRV䅝ǪW| ׻ԟ~3Fj/e0ei+qsE. rmz,g9ehk=})SbKriw-v|2):kV68!Z`L)[aSI Tv \[r;v`W]xmLqOIs^j5;FR8)2rJ]DV zFxEN3&dee;&Ξvկo̖[N[ x[Yj8M/IxcHg`д( V1ۛ@M'3!>#B`D2DG) ->lSDJ SQ{SjT+ΆI+vDB)1pаb821V(% 3XcN B`=pTNE"TӉ#dHiNw3h*~%]BT1B hh֙[Jr0ѹdȓAdݯeV\rj Fz㫦"l|{GVI9iBuAAY )R-TQ ?Mܢ }ϟFhW$q :l[d9xp8|aEVFmz`myՖ6=z,V*3CɰAJMK\ j3 RX)=C949ՠ -Y1l԰."JV!(=`bY}VdR: y9ҏ2k&EDBBfCڒ(hڔ4},3y%3䣧/D1U)9I+͈%HfҖ5܀ȡGGG[{|JmSܩ[jwa\^n%h;/?ֵտBSu5,9XN&ɻ2@s82[L\0/dQrT*>DN*TCFWנKܖ9F]raʼ\=E`+_M"%WOˮ!)ݼCq٣i ͕mvL}C˜lYhʪKb]t~=Otk݅$];P|JŨւx3 -DA˲ h'9a 3HCTG-}7Y8+GRP\!ϕB.2KeN?ŮWM=NBvҋ MHz-,d@O#*lH\Q$2![bELIX:X[$6*f`nZ*!#@Fb$q\lTIq y8)*`rݩ=ؕOZ8b| JvM9xچ.&.U{C_rUT}ѺYLp> 4YG-,Jp!I `!*E{lN)r4$R\ BN۔} 0XL k@62Vg72*հfDXe,=Hߪ0 IBUOjNq6nyt[H%ɠ3cDS6>km!K@I-/C: AL (I>n4*fp'MHb]ZF053%\jiǡkոz~/ 6E䊕*~dΪKo r9 Qʈ^뤵LU0kC"!WEW5 Y4#a!eM9d8u 0vǡ(*#Gĭ8sI?xAZEmcJZ!8! NQnnKx3d}gc$sLH!: ]I e%X 2"Vg7"~;Ezԁpq,MݥjZr(.ʸ({\qqƗ$R:_VwP"3$b6Lc2*/CŮa58ue<ԇ l!utWoåkQ{/Զzc;tqr6%N4ntuZ!l;&9e<4jP A};^d^߹xGޡxGމxGrB2Q Bc2%5)ɔSMs%q$ { ~.u,NΆ1%Ǿ6G4]6;F{~.5ʼn(;Ԋma֑Xb4 V:6[M!+P/ue=t}(^[y9ELqyv6?Vz~t[1SGoG_ykCDnsK:0S1xxua>xn=~],NR̛)=sN9@ ˪P:8Ũ`~Jgs 8։\  dul,-B߆J)č^/ mbV5oXV,>#bϓZGκ'G>d =>>;bvE/맛M@W[X xKlR 5.*-l{xc 9MkڅU95ENH75 CHC5ueM[8]Te`8_oӫ%s}731e{m=݃Qp[w]ԛokno}ׁƀ=;Vr+e cJU/ E}cn`fA|ڋ3N/A,ۇUgamjƃ:X` e(KSxۖe[ڊb6X[TMl+I=)}h;lH_V"{AMQu.eʕE,+luc=JFUzpvX4mmcu-r+zhB`\Yi^ePjt&TypvRH U` k9(ouMgm:袸\\N Z_W&Ƅ\&*!jW6 c:t#4 {?v giyOlͺӪVrۨUe5,*lVS6Twu#@`07AӼʭM;=GZ4p_g=TM]5EڎUMh,xB|#Fz"nzm)ڌv=E4".h蒔[u4%ڂTi\po [LYT7muyN݁?O\N^܌tytS.}t[ˣC t_,?3>1yħ9ɻb^SMo&8|vfW=)NNUd찙{4KY6n#ux@Fifj6xkfrٜfK٬9FQETS oWR . 
nI}-_2>oϋnIAWnt}ir^tWןYv5ڸ,Ь(: w[_s{6Q~=S'yh솨O~|w盁}jx2oOO޾[][vcpd2_cN6Rdp|LgNchnOt*S|1?s\~ppF}Bn;붲:6κf}_ղ, kHd5׽KMǓ5?޼xO/?/^&yo^=qDW಩864{#s \{_v=5=POmN[{qޮ3{R0YNGN!Ѽ GKKAx}w^ TܝY/FHqgH8EQu˺n*TEQ`ֶ C]A7e]R*е}lX+o=C,be)nn|S rC,uQHUܱ nUӴ wNg\^7 ᰛc]:&y) Ky>Ve;,ۉ*(ϊ}ϊqϊh7|VLi\g5|V- HW q#HbL]Wڐjxt#1t^NP b2mt)1GWcU+HW 'b\i;bJTYW#UDNRt(GWbtŴN+ՒtE +5bt[*謫oFWaǪ3х[]]]w>Y\|4eZ7 8WU/ͽWgpk>]gg>r IvX^MqV+6ӏ5j^gwUnzyq~(׋{ !C@otזC5[E9wXtwszp=~}~7m3[zUSkm9u)u;]x^535ŏ7Y5j:iarJccKO@\6_o]gVF(k+8}[ojNaMqA|hN]pZB1eh~ϙs-?sf[xpSXUYR6fꬪMM WB8j^k0]~G0]~$ءG;.C^6m8ς=wuS^wnr\Z<ʹj3U0xLLGVˍф(Hl54=ڏv5=BMIQ '>:(EWD RSjȺ:m+1` !u]1ejDW/ZA"`m]1.h)bZRS1Q+D%i슁+1cWLtbʨF+HA"@׀]1-$?#ʔ9%EWQbtŸrtŴ=Ҷ21  tEɉWNgiM]WLPcǙ26" ^)U?ZOl*U̺zhk.JxlٙWNk[n[ۣ+ii himL]Lu5m 銀btŸFv(˺5FkAbqxЏn,:ʺ`$銁C+(EWuŔ.ٍQW`Q+]Ub+5.u]1Ŭ;N z1cWL>u]YWcUpA+(EWLGWy}u]/i슀3vŸIBH]WLgG+E 30z1"`,eZH~!Sf]}Cg lCGWp|-dG¬W}DcFEWIWD d3t%!SW%ʹD"Ur;19zlP9T9vFk0-L:vpx,QGֶ7P{ z4@謘2> }Ҵtꊀ btŸI:L]WLcue N +*JZe]QWλ 3Hqlp+;L<u>8tQb+*jAEm銁+EimSBjB ]r+ƍb-])˺b'*b`+ƍbډT3Lilu! i!C0btŸ(FWDN]WL YWߌڭꝚ8|>Qõ0:;vD=)1툜]WcRZcXƘju|RuOUc6zj>iiƍZ I،hhD ҕA+^6&U2%Bue#: tCg< g`e){ln+lv&1Mb &:0Hϫڣޣ6 -0R6BC Lڲ<$C Qtq+uEA )bhp Z'n2U|YWO+8 ] Fw7P{ڡ*jHZp͓ +51u]1bF?`]Aة11M~?9=VsHmAEV3ջ'WqQ{qڜQA(nYݗ{kwGmJg1t퐮)NOUqYm!_:*]nI¯;]>4e_3,^ ImdI-U+@ۢ̈'Tڿ]v@蝍C;RN%w>O|oDB<췿88o-W)}<ķu_}p7v=r4xR_fFU=~Z '4G|AyzcZ&i~Qcn?<5Jyb^"B@s9>K_6\Aڶum_mGzր=^d؝ؒCIilMHΪtN݅]~|o{ϟwH#C E{}Eqy꛴/uka`6>]LO(uw5R =U>XGeI%(ҕc`g-׽?)ͪטB Tja>\Mt7VMݫR`tlbշ'~=uJ[W"D?8U>kV Ɯb&Bje9RkPTA*VZQ5m0}׶WzNѦl$ZLm1Zk ͻw!bwRwVZ+Otn?CIqiJM#re THf,c7iD34f"bz4*KNI9_=-4~YܘĐuhQJ=@ܽcd,TD2; 3[ xB. 
bjO(,Q!(}tB[f}7tO~.޶T/hSb8A1#3X2.Lf|}!sysYUW-wrZ<JIIbU1HGQp۬#5wnE;If)((E?aAۄE)%yNV3ëƨjC_"$\mxK"FUi,I;mgŖbSsHFu)[_c cfC)$X ufݠTGhOM6TuGݎ`FڨQ/3~ E US`N)K>(<ݫ N7h NAk!O4u/uh;g۴l(g/ 6@R*C!VeWTHcluXBՕ)HĽd 1YZWOWgjjP]* 0 VGQ7fa=g5, ȡ m "׎]SPP|MA'SXl<5p\ jX&EfC6I6p.`AV4:(576CidLBXCOe$X67NB&Ȉ`PǠA]jr1h!/fs7DRD :8l;̼V `*2"<ѳV %v%)5cEXqj- ) _3X-DU3rB`X-;L/ h2Б5;حjc:LMUQDiv%&HtyP*Zho:"rl*zV,:fa!dURh3A78ˤB A3o'J Ckဿ@z!uf1mh|M6] ,|˴y;rIa S!ݸiFOʖ=†`ӿwc'U2b1jnT̸֚BԦrwoު7al *'+EiMkׄ>H&9B~Fɟ$oQ^ bp cF[TcD1ьB;$OW=us[`rOuHMuc@sZ@[-ܬhZ+k֪UVg(iI:3BBv:Q=z&D3Nq+3jlj!@֌N (|5jUàef@ 2q]HfhJ7a#]2'Y{OY (tcJ S\d@br-Pq\,*wڟ. q@D!C1+PfEoBG!6MSZ,;wQՒ-* <'qu"BQ ΃rt)eoH0H5[F^ܼ.^n͍pmk蓼\k,i r~r黟~yHPў֠Nk q4y1X}!V?_J vM).ANVM%jfvu9W?Ok;ߝ><dFqc{HovۓW/jǶ͇ZmǑvg.Nw9Y}bOߺ >nYV)Ƹn-3c.kߴ?1H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 |@_mZ[{ՋV1v}saC׿Kku{~U6w0+zI%H1..֫'o\(qWK9 5E-ZOJCBWϐЉDWr pRKħNW@Il!]q1jIw#XsCWq@yb ]-;^2NWw#~\aStu7扝; ]wKr?{GXvX}7By=6#fOVv%drÕ6R|Dտ)ob|klaqЉ'_zcw_>}qGO@U 1a14=Zh~4=P#4 iDGN/rT倫i)t5'*JrBWϐl vQMZ ] un(z>t5]] j7.kj4ZҕS.ӣ_\KNWK ϑb‚jvn1t5\hÓo;zt԰U.`pbNUP3K+&^Ω7,T;вzj8t߹RqG;fg}fn(;ww+ҏ }>Na~Wt^>oKy}{*Vs*Ë/7I=M7iپzizqqmZ.ޟDžnMy`oH>!OߵƳ |;?uن2Hj{t/U?9k^,C",gHHղ/]]WG6P*q*k'`ˆ}/?7mw*JM*AuK.P 3 bg@ ;bg);.bSeguv;v@A>[dͿQ'Y,3iW=20d_!LsjP`Cɳ,.#\:\e)ŞU~p%1B?#[l ŵ<uf],%{ Jo94z4yIMgY-f- 2?G7=ċ8;ڡ~h. ϯ~{7Z]GMst_ ȯ.#ShpEM\GD$4 J4*/ƧT_b[>پ胚{fr~LFhpg1ivAh -[߹nCg?Gh.K͘ځ]XslGW>)KvAR꽃U2fx'iIY݈5hs`->-=N9c7uu>rcQ+%DH*pu"3;R] cLPIgi%,5DVZ˜/f$6K^_Fp)6a֋oSY _Obo'orgѬ ?@7 n} z7]4~PӉl`hU\nJj~#v wSnJY)ch͝7uYշSkK a59:<4q4YcTI(T[`Eֲ2H$*%H"{9$^tbqiX.ԡ<6]}v~bBi,'O2vuuZNnG7Z$\~!YPFc ^-}|XSHN?2R[}Kؤ n'2& Tj% c q&FLH\D"yM$Vm)!qdN6O<՞J1p.F\Le]jƜ)Pq7"7E={֦یͧ3ԲeҶnG oygQ)t;wW\?tzkeۈ{}+s߲]BcQc;X3=|W]}?VM~*b9qtQ9D*i@kJ7m)ߣzq,!R}өV:3`)OIsFW='^ihF\. =MeD.46 3]a.AE`j. =[9<o8q^#jV!tYFrɴ>bId8t0iuP#n+TN)l8nnoG'U˓ǓޜPfN͏/q }"X`~}V? 
nT/V,U5ַA՜E]/m}zoz!6 ~rBݻ<`1'ZV&zL2\ ._A aQD-ץҺ-YKb !GA=RUPfqe#Zֻ qeV*w&Y|>g,yQT)%HR&JF4 Fta yUm=b ,@m5_x9dGQ"y %QP\5G88K)1 /ztf08} tJleyI0}Ey2Ky ޥ<>:/YCvf̵َ_sߣ3g37{M֬RtوԽFdۯEmp~:ףBVڷ#fWmJ;]wvGJP59n %u4qZ0_FEL1Vz(9o|/UA^2W;)t6ʇ(a(S\ ?& շkw}]?ƓOFe_V]P=%\ίB}v7OnżʮFӲkWկYk\hk?1-Et7\!Ȭr(i5G(tqbhUS*SCɉ8e{J8 / >-[쑖Xo%K) 'r@IƁjdbNrX~DGuE#rkeKuI뤭M$BFcBbk |_o]5>7XEB턛g#4uKc ;IӟI2%8Z=gѢc,4lk"KV{]4pn%S } {KW3RSrgW*8/&ΆȭCێpf{Țs G;tA}B׹"5 @3fTRXpH-D:5TD[Rl K^< 4h$q#+E!&6bm>gHKaZM/09ܕudu]VRb|zYozQ'Vo{wc5R{NyjtY768XI5dO$p/ XEFet1ytA)C1Y&%4DEwJTid,&Xb"Q"Q("3wĝ]|4oʍFOlGlRJڹ䓕!I+cR6Icplq yeʮ2< s=,&A DFз#Ʀ̝U)B6Wknq츘Ŵc[Ԧ%Q-o{ۣ4~ri;@K"$'m:k9C']<2JcQ`(PTs0!'̈́NS8w#Sja\,%(|{\-scs Yj;zG#9fgY%ـձKOWwe+z4WLN3J#W H*zCܧ.RqI!i{"CRfߤ {K!K!N#wDGG@`b+ "Rb )fUs]ms#7r+,}JR;+(J%wr-;j IYԑk{!EiERAMS3L4l"osBKt43hB.CM ֓4.!01'y-+6ZLklQPfM@M0J+$Z|WvPV@0gRgR,oP 1Q yxAz%oE׆~;im2~Εq| o$Y 媆+c^ -Xs~,q DBDBq*o]%v&?_e3:7l%nрvZ?o%b;yK8qJzJf$]#VZwKN<CG3luctMɥ6rsih6[vuO:EW(s7{ ZY:j}P.~ԝyfQo0QY5fz5Nfn0ӻKO峓+2DB2w'9o4_FWㅨ5v_s|Gob9ZIcb9)1 Kr},.`HTN&󠒑ƁӖŔV$YTpdn6*KSR3K<"e>Y+ Ph۵Ƞ8t]ˑMWD3ht=lq{Wj|!D|M?Tf62gEEϐHÃב;bF+oנi8V\[na75s ޚ1/4v["qtuˡnmtP*G}s>Ui WQ&,t/E::;#{*W)h ou[`|4j0Jhkc&ɤx2NL(gh齳2?)kU[-UXl3^7gSkG SԮ/M>LU~nwi8<\QIꅻG>2^[nf7 y^>BruNEbH/%8͆u2{ A=]6Wz4K38}G/u.1.zhQHҢ6:Gt#_Rt~W1@1kEϬ!)ϭ{ylaP(0rn-1*8qn+YX?8x9[F/?9\&^y{7Cm- jH: +ۺ{$ L6 k R"!ƣ)-N,Bw{g(cت9< m:k4pN$:C R"Jf_3f: ){d7+^K{yBtNVLuA-_[z]!m<*7kx,i)aMdoZ /{S;K p浞fts/&XHK!Pp +J`mVjZ%NF[e -VCG P&jGvga԰#j]/A'I)=}a#0|dR+.QiX2F()DTCLbJ #ooEԶ77Oa|ng=I+ƉicI:R}e8|FYyo~w{$ b4l&S*{E2߇90BF%K9?OGEĠ#0@Jeon҇29?B.nh}'x9J$_f=|Q'n_ǵ}8O 37̓}$6RB?qӲmh/:&p4@?5NpX)u+1Sq|;iv'7r/5Zr/6)7b'gob͓,gtZto۫aq|m`Q$(:Eց^śvK&a0LԢ}0qH_.4{.LFj഻Wc4o2.GB =)óKM Φm^⤘7:*/%1,| ߽{m&XlL| <l2T\xnI edb6SK B̆l wxGdϵT2SnЂ/UUZ W[/ퟞZW ٴ #uN X `HQGg.[`d:Fl_+#0擦 ;]mlܚfԾ髝HnOlW".{[Ϭv={Ý@z>B{VUM ZZq-rlO(#e&x"Ҭ}OzW>nwR'] ⑆l3f8PhJZIN d]I.8Tc,P29Rg a* im[➯D4YK =JQɶ,\_2Ib.}Ak =8'/THX768(R cdJ KuR`f]$C9^si,J0FVgdP"3=)JZ&5z)`+.#ksȝADV zF *#ZgL@zVMYW5%vO_Yj8$YcM(%Yjժ (I tϘ:8gc rtFe> 9 )gY"U}2%e-A.|[gHm|ɍ.3w=zHe5 
Rё1LGóilCc4|O+(O7pN6 XhѰLDԓԢӗdrV9~<ӊ'\:>HyaU*C9XrB"&R@ВB`r]&%5Rj&.[lϊ2exW:nkL[Wh#; {BЇ[C 7B k C1yΚ` !7ޘhLA,ݢAIh{жf j+=*%m6V sJm :`Τrϐfd1AD&ctQH:+̲$LJ̅7@pK-q\G~٣mJP^_}؆=lu<?ܤۜ{,Nt=U`rev81yΖ+tjp\xs 8T#E2V rd'Ugߋ^Gi^8{ڴ+b6\W EEdٱ-},q1Kxi'qKz-EY|`+49>xGG~WVev.)j[k筤uVVRzvӧ^ޥ#]*z:[,B`h*:ЮӁ@t> ގV:y-otf9ak:j}.f?μ?$9>ީV -4-RwGV%(ZM&U_ jn5m,gy|O _OUgƄj/O|#{;[CX}\&7gU8^ aڍ?OVE@<e#5xkD8DG:M/Hjv+>MSql~^. MzKXPiU0E$ BhNP#Ktܟ#@hc"1MӹSp@ :g(@,J@mB+[#} ;+es.N)q:|NhR}D(Fuc'y8t'y0_.ɽ h!ۣy YfG68o$c!⟺ۑKC<_rtu(!g<.,4-4nbK]kGڦVpmsE;+jԢ=7 m6c7>+{K7}@Z)gjtOnƍV9?D܏U>Z|N9]^QD+yifB93ڳ h%᩶3 rf2.-|z˗ pɃ0y, :"E:RH5xbj'y`NE=:h/cosѢ[:66JP %&2` ܙyاN 1Q Rpc) mF_fA%$|Ʈ{_rk!3_{PvmmqPZ!zq!ϪFc|~ZfnÐ⊈SO ~Oj6YK(>}FԔF+SN?s >}Ko0Tw՝CuP9Tw՝CuP9Tw՝CuP9Tw՝CuP9Tw՝CuA`.drdj R%1@Zm&[s|] BւQZ1"ckcAC&$eg)?ylrygG8{P>aX$&:g0TIQ J홅Ąpr] Z 4@J })#>CL\0IZ!ޙ<#o`9'490L|0#9FfKd:g՟4ѽQey.*BNhwwَt _KrѧnNOIh(Ϊ]ɳnM<8᧵pznn&oP;59/rs]F|,6 9Ah֡iڊ[ϗQB{|БLٮ5ʕW>fr=}=Öo+t:om'ٜO_(g B9ZJ+f]X[G+" .c9cVDM-g@r 4H x[0v; *t 5uw~pI,?.\ڛvLHM(DsORvB(Z jD՚P;&)WN[ؐRePRQvHEIZqilHYBKH(yAG SS<^ s8(q6>6vQȄQx Mi)r}M;Fs7MU` G Ap Aӕ@FD>rњMiMQ@̅XRPX5!R:(M\XZ{Up(9 oc?! "`# &Sye}.-v޹vLBz:-BKNz *`Ac`E_n2S ^7-rWݗbC ~q 7rsqfn)'rT^̦%wN,8 Qj59I&f>eKRdlY=|2F=YԠ kVФpQ߼3&H|vBcv cwv#mi"u[E՗8.Ya PU?7sه{ߣo_ǟǓ/xi7>{^cBƄ}ںsP9 _. ´[ZeQO ( @*FV|o _dv2.2kQX\P,x#k5W;XMtfP;.VdwE9_s(qNi@625  E Qĭe')q!(\Xύ$!r1 LP/ ψO9TWR1r6;jw,wKk7޵;^1OmCABOy#C78Oyzz7 %`D %1j#e3&$J@k0;s;m['t W 5TnӸP{T0a.`^0U'«8"W֧CƠdQ*dhEs!!fzl5RU;kk3f,j KCYE@ouA 9M)iV :ô ދ2Ȳ@ !s!_\xKO? 
9}pm+յluڜ{fg4Nw^j 72GG{7Vl'm?Z97-I ]#FI Zy*T) g=d(3Jv(ie6B9dJ ]:`$`05YS):d\zӞk쌏|KO3t &XYBeQT^ is/97AB}mhH$,7Z7r4S+N+biR`9*UfW:zg˪cdh.`=Y{l ,'x#lQ^OwEI1騴ˎ1)4(ys.1gۥ̝9+W'Qu[3RBɚ43Ŭ4:b*",> FH D: [Cc# uN>"MgI$ I,M]9AsrBY wlf=MW\J4_tuzJ_pVL 7RJ2P`Gݿ-M%]L',/Y~Oc%U$xd`d SIQOKFrG@3#)=FJ..Lx9]z~n3 AV/!2yBt62:^#.EXv)n:~f"gttX4G">A_} >5x^BD+vm _,PIr _;]EM.嫕 /1h8'0)եw_]IQܞyݜlvl0jOWqpzLrG߿o/!:|[qB=0X։ŃE!x>^k4鮚MKд]oo++i j&Vo>;J --fˌ-UZBb!ihUj)U}rѭ$c$م/Hx&΄ZFπ~_f9[2d[J*8G5O;ٰpyuc=ocaDdESςArmBrt ,mRsxm˝٪WRR>3;_G .+U*çYehS *O^b FYY- oj)e9d[㾎r*[}Q(n-Fq)lY gVbu!-Ǎ#4Y%6,hѰH,6 rkϠ2H3/=2/'L.m"g.;@58ߠtAKEUTV&C_K+^.)zi]$=>N ŏ@Jj}Atu| BmِN/w 9!4|H͈!LڲZqk9"(БpsCpscɽovlJ^f6}{<]&tc\Rcݐcۖ+0uKky^u9ǟ~_uɻdPS)S \0/eJB$ R.zOy5hR'0{)G~C a$3LM0?5IðAҬLk1A$ØYʪK$f4}$b7aބ݊ĹRւ"D3 Z(2%nS&rN5FYXm&-ht.;ڂ1eHB\)"-{5'<^Xyg1eoS) [>y>!k΅ekD +=!]F"䲳 K`mQ1Er Y@uvyE"*X!ɍ8U(W\P;vCţIS6`)N0<=ߖu=ס.֡PffؚNZ|bcZ[Tr5Byj:w`p|21i[,6YHc7p!I `!*Eu9((@KYrAc)[eSAP\ ,&ÍεLNkؙ8{Jg3͸/\n.%*6;+x }{}'|u˧=V)RlI!Fer2X2cDS6>km!K@I-#иnWk9)AL (nbhT0*Y1sNʛ\u]J=vJ-^3]zz#5)Jl'-'R 8b퓷Eq(eDuZv[5kCNsE2!C(&d$d.XRДsHڱ?La/Ǹ-3G{D{#X[=u9iy6mc)E j` )DРV"iUOv]+qn>u 8ƵMδ~Que{x#J#%SE#̖Y)%d QexPz -3?C}?,.u4cj_l43ޭՏO{ \U>r館dїET5 Wʪ_kh_V]]$(h˪M% ^6E*Afqy*D65Q[ YV$!DɀZͺ-mYrdu@u0J+l$h_Ij+qL~ Eթ;+Ù B:]K@2tLn7nŹ;{!}%f=mfM6#Xz3W|cNҧ\dK[eY;8БҜ_) qvX: DBDBI*_?atv;g`-y gb@xr0ٸoǤޕ-g|^=bI+۬ VI)W좑0IVacgxwI%N<C;-f@nH@iH MH箳kULtm'/#(gBCAfW>|~]*eC-Vgv6M[O TXz*&mABF]ѵW 5L{icu>>,E=?J 90镲һgH`qWE\m*ZHL^BpHhMU)&;%/t|pQ޻p1>E VcSiLFP I ۥ'<佄cW y7N?şݟݘwZ;!w򆤋ֱJT!f@g JԲ g`6*A)D¹@)S$b(4rMDNg%R0'qd5Pϛ:H XQL?,7}#dox dy#fڇOQwSN:(d% Eꅶe k_wܐΜR?l{Op~쳌OeZ}4 -@@W)Oٻ6rWXy包ũ\Nm*yJ*'y@2CRzS(e=׍Ok C=a` !7Ph90Bx;t[:c&*<>B.jLX! 0g0:2=K,0}0 u"a'Wi>_vD Ern:R3sdZU#goY߁ge \{5vdݕݦX%eZw…`ǵ0{+…Zn~ \cy ĝ/nv]èpEK}GKlovT]zQl&଴=,3̕3<: {BFY?Ʒ ;&*sW~~5T3I YKtnhۛl1 ltK 9=)wvs}>mZ&BM1m˓#%c<}@狨~hkL>fq-KרEQO-d{b1vaO> #u.`B?2YD[,Ys Ik2.TDc!;E8bv;mF T{GȆϟ^ӁY>ECՈL~8d:'} ў||^$S0֊(8 Y*ϭUY. 
PYh2PwvSR4ΑAR7JXۍگlCX+޷}+D (!$aj:x/Q& 8#FV"}Y$rƣb+(Jo(3pNLЃg?1 ~_K$`%_RV+E(Fɇ[0øz1s6,BH?]'m}%=qsmPmiD@azoSz~#31=.]?L.s&ꕌW>5!M_@;\IY岍K\F+%ltFۼOl.MYJN;i2wџF;>iׇ*Mo4J? E?qU~ vʁkqbP32 8rˉt;+'&˳Y',A8dwm&1xxnO-Vl0# ټSסrϤ/Da>c}4ʭw|y~{9"xPV\2b6S?fC;C=!*.Őb>ؠJdsbvZ'HEmfJ2#0 Qǔ]2leRQUQzc>oZ _0xS~$5پ qu΂~d(l1'ur 9$XdU̗?;-r)(#eLMEGSfˁg+6`3UJx'3(ږ+Ib^șR3<PĕRѐ dUJ潍YO hL%)ϘŠ<'ԧUP`:NiDy/zS5to{c_|ŎA>v>Q73K@RV#lH TIvI =Gɭ&̬$)Ă+v -8#)Ij'c]jrv]+ #'TIkJxᒵ>"k )YgLH9Y+W}b`h dglV'I0yDFB Y$MI-Kj!?w=y&ψ *4^Ȃ(E aRJJ QQ{1BjP+.F;i6;#SFy}RDt*VG"`]BRk|"6**cPG{xȣ=YhOڗC 8_H;Ô:fT@΀H;ܚ$CC\ @GˬI% $5%b(WE>WYֹGVN%vAAX*9RpL;$f<$ţxlئ+t ZZ.yw3hHD鿁7%d0R g/߅ξg}*$!f ,4 mGl0\@d1T "Nȣh!BQ3l4i<$*!R"Q;2 &Q'DK)u Z)}P*K dp> p7LB{K[Z@yFPƧ9rF*wo%JDڀCh-KI*ƪIc,5=Ⱥ:8ǣ.)Ţ)Xc̘2Yjp:Z $9 78 ːŲ, ,3O,':< Jk$dsgGa9S,e_Y;a ɤM.1,j)pQSAR#JL-Pj w_Avp]ol6k{ft̽cNpyubn.˩R$r~}+ td#x:Qoɣ>?^!Aԓ5>UPJ+n~S=Aͧ/1k1zThOPE(8K'.U”so$oA?ҩPlf2h¼ hӧ2aV5+$ Wmy뫛ѷ1p3fS`N6Yhʪ%,e4%s"Kf=3CGev8Gr`YTEZ ڈRF΁N=~vscTICR6 W6{B]NɊ)Rju{Z'Ŭ'nHIg'Y"T !BqHa.VjEx^5si^ N.a5>9uLcmzbVr?ݗ'm>N;S[]{f^ O_MU|@ı+h:jnS̢Lw.tJo-a͈9Q6YrA'- ] lBÍεLJfF~XTӅ8(]H@Uօ~ԅ'ﲷ鏶)ȸbij".bkll˒Eh)gƲ^c6>km!KH [h+*L݌CdcP0MqQ(£d1sNʛ*kja6{&`zc6o5jFҼ@KXiGB R#傌8pB2&5j->iB&dYф,L,#(zfK$a5r֨_w`+q(Ee(F8jĝs)w< -`1Q]io#G+2XK=Ǝ mcLCSEʤԇ7xHGQj`V2 ƨSM`DRIQ͓ ,i&4wkƦď /묭lˋf^/vTCk97=U udsgM} De<<2QL %HMzǶ|(kC(NV<:Ma_oqBԹ5CE? \T4i6=?}!jBe4"BYWE8 Dm6.ޑ6(ޑ6"ޑZ8|T JJ1x@0gR{ )ؒVEЉ,q\V\w`5ɪLA謏 u ଱geM j2TA>j, GkDM B۞@&Wz]0;"ӫҫ6ӫس2V_S #@`'^U_~3\yF2KS 2w?`t_Qwô\n_|hwCilt BdXJab㬚٫>|Dj9\Y]U$-[{ǎ$/yZny y|zut+_~610-`POGc=?o*_^ ֨@{ЙX| gPoOb"W?;46\Or:g&ʖ7=O%H樕"Y v!׆R:KKLQZCdčHDx;{?v7 g86Ͱ`.U1J"7HeLʁdTƑLV{oNg;<st+0-NVKiͩ(C˭>K-Z/hH3)@R+aOx6bIoRj 1 242b-m) @LI`x="Yvjĵy~TWn'8:4%ӕxgjO˿N=GGQ鿕Ee|HZvKj;MSB~ 65e/.'Y&*x,}p:µT%J].{a] 5(%Ɯy-5.$`JW:IF$!x36rK85tE,|Igo+9%8@1(S+0&U0yT0%j&Jrg%9gLP\_ 3c 7 `xU LxBEY* єj@HKZFt<%E En`]+_Yae2i!D|N0J#|f"m35R47*خwB+4S%Q +R"@?63畄IZ: T@f L4"M. pI넦C=;J&*{N8VÏZWG7`Yt[{3`VzO?Sӕ%igŜ}#U*&~*+U ]P

j@ aܐ 8<(]ד-IrF*lQ!ɻIKgpWxo/α-ia9p1\yNU$RQ5_X>rrL."j-1L<(tB&wѵV._ǟ>xswqr0{ k岟{w;)Owqo^n/_ BV4[ZK^U []\]mLAO-,>N_ۧ*'WWKVrU*+u4#X0WߍR1)=`w_ԈTA}eE/@G?ۻ7??̜߽/nSj X3_w`.QxӺE&*ʛ͍Xh?_.*(bʘ )@(~sOV+r?kì.\#8rTWTTPXQapb4Zx c+P(VaMXFb]IGn#p6AQ1Ia})X,g-D貦G/i6*b"AlTS By7ƹJ,h 21zظ}L~0Ŋ}*p *>L 31U(\ iu/z5H*\c0$)RDvNRc'A|c{ Kw#oة#Nd{SG rjiqTZ@%Kϙ+%L%+c':qtXqIJ sP\DIaQqe\$#uEN<7JF./AAâж=SC⧧'|ҚfĮ/E`l`Da L`:0H0½ǣN)L7xk-]0h&9#oyGM8 *vYo;@`#b zijTR9NXAH[^+uQǖ\@l饷ef t7qjzԆ.)<<ѤeNj QG!{EL0sMy"x漵F~㾁mݫ]ͫH¶e}>A&~^.|ޔ6֘;w 'bEijO{kH[#Z4R֥?4Fbm]`EHk RBt(lGWCWVDWXմ-thn:]JEuGW ]5,_ũ3.LWtZKCWD4t;ں-~U^\?2@1>;*]~zhy}׃.0œ"G/G*ک=%'%)B M/V/+^^[,'R1Fl+ʘcE91OB,DFv{+FW77dU~:'^ }l =3=L7))ԤTұK{&٩915͹Kʴ׉4٩Ŝٵ8kd{hJI, 0'eɝY5~y1ȴeILLV-;H<_ǥP,vN̯r[x;\-ϲ V2Qy&_ b8z&Ȓ33N$a-޾J1[Zָ.Wmq- ZxTs-k)+e +[CWW۶Մ7%CSEtj]!\uh1M+Dɺc+E61m:3UD7%]!]I-+<=Ea1WDJ6V]!])Ck]!ȶ5%c+mm2ᄙ-thh:]!JHWHAU [w#Jޭ #]YU D5 Zt؎2k69!sxF-c{a`]mRfѕـLGW6=eҰh~Z2[P~'r+-]ͿA-ƴѶ54 p-i M#ZNNӈRʎe-+ʴm ]ڽDv+D;UytjVD7,o\BWVԬ#+I(m2T+L[ tBttut U խQWVtBM+ՋЕ&"B-++[fM+DiHGWGHWX[5tp9m+DK(Yg #]YJ)S-+xk jBt(]}3telz{I~C nFh9}3M;8n@Wb& fS-y[6\dԱk_Հͭ>t'( Fϝp4<uYuن٨9˺lxZLBڴĦTR2tQrљGhbS&aށ 0smFE܇[ܶ;w!.v}I&Xc_dɑlO&AUlY%m$q>l>U,H;+ u~R/7DI4M-94n4M( =D\0 + +kQT;Aҕʂ+.G]ܾWkUZt[Ȍtu8tQ]`c+X)thy_IC n+-w%!j>t"ftAAtbRGWtEpE1IDŽV0ҕRڒBX V ]\YLЪǮ#] ]9E "`- \ULК+BiGugUbGL8eϡpU{Dڴ-Qaѕbϧ#]kRzlS`' UܻAKrK.) bhiLU4h%CiB vi. 
"UIp+TuMZ9ҕP?"[Lj튮+  J* DWX*V ]!\u)tEhADJib JCWWRК+BiHWHWZ J `WNiǃ(8"zte +lQW%ۡk%(aTWHWV; % `Y]\UL]Kv "]9Ù^NbF <]J=՗CW̪#*J[7]䣖hu?]Di62- Fzq[Vc/m cRX~P.¸ǰwpiWkD5;ZV y2+f6>Zo[0_nmM1x)SS` T޻AKrPPhAtW8s+chC ܸE;""ڕBWt0ҕyItE.ѥultՠ#] ]I++irj\;z ǙJ]hhE_C :Dց`LCWR~齯+D)` "]V؂ ;VN1brtE(zHWoBW) "CWR?Ԏ(#HWLAt*'vEp$nvDHWf#@h;NLW:F%BOyWPr>,-芏tҪn,W@ń|# 3L7ˢh;^ MSe;|4M(iMhZ0%%/3U̔BWvtE(&HWoBWYEAtE{_B\l)tEhPJ;ҕ+0Y ]\WL8ڡ:D -/*5ztE( X& B \J+BPZ3ҕ譔jGlvp"B q$u5s"ztE(oЕxfՋ#ƍs]l{Vp﹒ʞvCE+тHW/zJDNrO6ᑋ_1<>_hj^ݖ+VT.?cF+*霩lI̹9|sr11?׽g4j ,_%^j>[iC{N\[7pԷj3/<2Ĩ:f P,l$cdkk:)5xgf$ LCӻIW>ܰ)w//nYCE1zތU_',[_nF>6J R?]Eؘ{>ÇX_ij~x+筭Pl*d뫠Ve#u~|A~z0D9ސ7/|H }hlqc[gN޾׷)rv,þd{= LE/ڪ'2qtR]_|57<G'<).OXfQ| D56LVusxr*s|R!ZnpaGp{xMcQ_-fgom;QplsO,m t'E6ɝ2}>{Px*c4@Ňہtx#uEEd^V}7ͣyQ>ƛ{^oLn>ȏ6I_,mAo|3GcgkytPt_}Sn1Gk\\#`oWV ]ii;Ag:U17>49A%$MXTObr}%)) 3=Na;0SS ߿\i& =^]?xjھZ8j*sGH> shL^tw_fnt]/[uOH e I=8,j"dV ёTk:z%0! =q¸ N2@Jxg,HR38OK8\aP/fHN41!9X҄N#oNWt  ܧ((H0h JKTΣ_B\+)22m+8:`4c&LTuB&Y'LZhzw1d},j!3m )Ik 2r.#;Jp<(P2(x%; tia$51RB4-[DEe\">s&gC$x[qϿ8a֘$C %6 8s [8j}H&Oeo'Z} mݑ4n߾ f$wb$=зӓ#a%M%F3I:U$ Dk$ B8I'ih$|/Yq,5e^1{%M#y)A&O/N&^t^q[angl}~yY g쀔9 WFktIJ Lt] WdcpQ~>){^o˿YU Lso}=W$ݯN֚ERө<-HǓ<\u?"'r:Lލ>X̾_/>η˓$hq;ަ?\ޏ[Ѷ˻X7Wvv۟(Os~udtsAIo M mGϴ0[3u-^s%ޗֶ|q_ %6-rTuSu r]G4V7z'hW:J$+?_*?I [+*Yb5 : (^,0}0=Rϰ$@X%S.LL$0%E R YlZxѐcYB)B(9Z@]-AײF^V wц,<q1֪bo-xB<"1Y?b`[zZaw'8J@ ڽ9{̟[tVgy2H#=& du崮+I4zP_X/ WtbTq#9sP1ڨ"Kml=:E28Y`\Ш Uy D$Q"$U\ "4'$5p<{+2۲u;كcu\ Y {5n [79>%b,7zgegu b,W*s29i):Nfkr8Pu;B೐fH":{0s5LHudu^t ީv\,.}y7m_=Cxc{=>ET>WeObL20-&b2 CϠ'۳ ɠ7 f]*>],*YWAqVr(ehck t cVB닰tJP3rBr Ǽ֘(mDׂ LgeE 2Yܩ9kUslrNAb>Rf5KəD T&pݱg&=:d-&{Ɠ`01 "L5$8eC%\]kUP|ݿ16elt~|{85&Pojx7u˓h4N #"azfp`傠Hmlkə,o%IbIqbwzT^eO^$ u} `ٱhXkD[B sk)ɺt5vOG1 h_ej )ɗdPGv9Lv{p\$,l"=ܒ\n6mhCvCKoIp[.xYpeKry} Yv![w wm1'm36WP0.w$n)}\ L=OuC/i:g=6^Ll; lzx7=FXDYȇ嬼WtN?JrعޑGs4Wjzh-m?wmyq<8:v`LVRjvsvo`Ok6Tmi@1kEϬ!)ϭyŬK. PYhK! `Pw vS P䃔@Uݬu[ܞ٦%,|9=QChX$GT`gU Z1"+|Q}YpD(k_#0@:Y2Orx̥P xuh 2HtAM &vsTB CB? 
yݔW^1Kt%WWjߞ}D\ɺlYqJ-ҕxm&`$]`| X#]VkCWX4D1`yNL),Gn &k։gZy O/ʺ?(A'Fg/L4Y3s)` 1$B2^&䓘-T|֋0yFq+/e.8o&[ί8"x=R*srV!oumAKKx>-x5^~2owʗx޷k6݃]ߖ%prDu\M{ݟ9.ߓ[bݖf򟩍kohN7g%!j߷<74S|ņ~\MW^se˗^O.!b@MH~FcfX~sʖ-;G1'Ww&﮷drq$z]c66 A2 4nwŏl#5A6R'%jѾP]n*>U8@E[RZM 92'`ɂd!E1;4leE^Y17I践ԃsZ @O|.ڢ`gxX⊼pX#?RJc;ُ0MMhL\%hQb`d)ŧUlpf@je:*i[S[Ώ"mzҚ$,f륈,5shF#WJEMrRx%4d PRm4Z8dԆ8}@"B Wʴ2 AQ<ǚV A:cW-AK96m{Y jҴM^?.vrHMل{Yuge;~F%WTbB)5A.mp ]QNţMdgEҤCR22C4+0DƬvFeLFhJ]~jB>",Iei r.I(d=ɳjZ90+Mdl+fs7ՒTU%"m@%h-$XHuL cU 1EfYCxR,ǔut\d[ : _ВeQ#m޶XUIOđ JXֺx7Ye `;Y`KXc̵sT~RGC!DjQw$j[4/gcQ/ru5?C>FDS b2ߖ|CgZgm>3u/h};-M^d)]__>nF|IZV\k=nޝvZӏwsJZE;2SgҦ%Y*r2g{]OB[.'$ -ѽIh>޶H&y=-߭2ӽ5U+[lX%B^?RۯZͤtՋZݝ\w7׳Eg~yҥDZ$ؼN .g'y#^!`vDNe67 rh,DQ#i۵mn/BfւQ/gןG%l7Ю_oTs~|eqI9[o{?eG֪?fqq7ˉ: R_7qu A*# *yQd/6IKG%!LIGp:<<<tzN+KGf;CB;Z\b%oRaH:B-!\Aab`N6,4Je@EI Q3C '`v8G9Rւ6"D3 Z(2Ws1HcF4: \p%>eHB\)"\ĉOu0%uOsj9P C*`o +wm2Ir6Rʆĕ͞E.#ZB"("xeJ~+XUPQ1I1#$}yxyDHH#Nqt0+ j~^Nkx]pN(xurlʉ-hNj>%}׷xe{58nGXR7.|@+)5mQsE3i .$$,Dޢd)@J\ʒ $]irV`1n$pdXm8WVQXHDUB?bɋ[.o 3ޒͮ ]bw v1 ~ňR$ݲ9sf,ug Ls> B\Zk8;giCL ZX5Qj#j+ 6E*$()-\YRFZ'eY>8i "-%IȢ(\4D u"6VjُR*Yq(Xm}2"GDܙ] G>GrB2Q Bc]IMJAuN7x@Z}w(\H#wMDt2g@*ddMk 瀘xdEՊm.*0ÆdQ*6kFe/ DU|.Nvd,Hx/Sz#9mtOHV:df"rxlEwbP=~E38DV'Γ# Y4lVB=`u_++978.hdmc Msr9QG{c"G!]3``rOfټv9rGw֣ņj-JEQ0lshԭ+o]U]EG Q  FY3Zt*c997u嵞DN٧C/tP!rhz?DAzY,7ny֮]g?ι@Vh?t#\%>"-龁FVϧ9>|X9zc4c=mTZ{HI(Բ}>g7h|#-|@\zb:Cҹۙw>VrѲ H`iٮ@g ]Tb~F\I۸$O ZWնx*)QDY-!,ݬ qw'Mm[ju"ʅ8}SDFk*YMnu`y\(|v;;hYqKA^eK nRv ߱զcҚ+s HYčԟ"JDlדBL5+X694䩻qo;QcȁٟNg98*@Fg|op(q12.`_ؓb'?mU6Y]ATIFv1A]~XE?Mʻ6h23hWQF#}㝵ږ4|~w~ _gq\;Ͼ lXy'/?Z|Eb,bׯuZ_,7wNNf%ɋ￙,ߊ:[t^;7NlL=>9y*3֫J-<[pu7}dA?.\dŌ$W WzDş\>>ri V.k=AAt%$qwRg5f)$6j $K[(E櫛n:)Aʅ y,:<6<~ʝ:lt/ JΧ )vz>qPq*[6tdvK!=$/#pwX/~kc. 
pO6x-Y׃nPTs+o2Oqwڊ?4LyCg>`jXL]ū ˟?GcKB%Yk8]5&k#9r:]wvE'Kbyb+ XޢsշIF[Y@3\J[;uW#гU ̣`:)9EBet,?)ͪhB-'Tra>\MiXT[&Pck6d7m!ZFL 7^lPy=&[%kAgDMԴA5jWթm Z 6FKwurtxx6ޙK:Yixbnhu"ɻvz|J\e|HE˛|N{u 9feee0 ԜADU^əJj3kjH%%VƠsuNFl 7| N]@,M%!zZ6'C 4"1w`ڧyzЋQuD!%:.h 9tt%`v5R Q{pcY3m>%(Zζ$;V@r0]2| ;d, i G匜={v8=C`Н@@De#kvpq%clc:LċJL PT 7 ת a¨4,J 1tL+(Z* sLk2B:jˢ9p(hW]`b Vi5:9/S ]% 杈Fi*[&@5?LBJ x/BȀzg! X#52 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@ ZZYðVi5~P~R&F7#H X(GD\q<H@JM n;c:iѨ"fǢU-Eu W%4,>uU5XHzz*Rr+Q"E+X {4ꪈ+VzHP]Fu%t/G峳"޵Oʰ'%1w'[7(NV<̣alGGB3J2J+aĘZן?3G4Z&쑍Ue~SyQ7h._O{˩g}Md,+(pb!(iPM!G\%HtʪlcJE)K `h.Pkxͅ,/z -^o_s'hooN~jy}?s:1gW{zݦrDzM.A$5$*Ya3Kx YHn5 R?f+"ä~tg?x*j)5Ea(Jy&0j2ozOkm-yЯ'?a֯'n}L/znrQ_9BiF9)%S^>Wz٠&_/!*{BpzDlǵZ21z,q"媺Dzt8 /#/֔Nl &ͣl 4&PIy>{eJSw`A\f\%6cN$bu^Jv$߬8?h PUF͎g*H0:{S2Zat.KiP RkP3qvC9ezt9_Vx'_$hi0݌C߮`k,-՗[fo+.[EI{cLf靏x2p & 81ivf8`}:M䝡oa92kMm7";Tp z؀oѾy! ؓ70#&4lPRՄVFz!rV:Б9rJȕV^z[eV u2hA}Hʕ%3 R|}:}}fۧo͇p@"큱#n!$_0g7mz7%=yھ&yie »~o~-бErs힗|1W|PفfݶXseǁ;Bd>U'Y_*'qj:{vUXY-EaN&[Em f,a @/!ǀꞌgn?Two}XkwX|ypje3ۺ"z!HU2OO\O`T%+q9(ON*f #γdU ^XPPg g Ҳ'OWV\IpY(q<[@\y^g ވy<̻wU3cbS>-v:^ ҽeys *?tX$HXd T1hp0BrD0R u8)$Kc8'2-q0 k5((#uI9qNߥdfoM)zAA༷TDY&1Z)ϑ ;+ҞqggXu˚Biz >HB1ӨV#/Ax)vmp3SBzatM2XGXtwR;ro&d Tq/ҥ8\>>F,TXYUBi\dĺ:|土U9%vlT?~0Gt֛uKzKbEQLł}F{^m>vPMōޔ/8MzLفX `7( !x.`.E-ĿϺ3t{&Ojx PU{fy lັm!H-{bNӜ4̦L_ϖlMAٱb=A\ߣŸ\d^ٝuH*kϪW&k-V&#uJf{եOp\,(]AtՔ^=tpE~ˁޛuPl_]yYxwM=j^)}<WN{}]63A!n֛GwT|<:/յ*JKTܕ ͞br$T<(xz|<ߚ1` M“xi*07^e|2l5O7u.˸?M UК述8YODQx9F_6']T{cLk)'\De:BY̤']h2g]v Uknw 8_ȎRJLY*,W,| yg 9R8$ĤXF U R&P ׫+G$8>IbP0入,)d!b,Hz%^փXN(g'7hdMޏbo5;gsx)2KPL[8Zb`TaVECJ~]kor+ }JՏnB81l0rk{\.{š4I3Ùڙz:5ʱ1O(JJ4ſW7Ct^mvQ EM$~wyzIeG}$|Om?ZZy߻&|wӦSJ =y <0:xM/O,i,~qlKA=պÓ+,ON}]⬺7ߨ)ds;7hà  l‹=F:g7Q1lT~t|z0dd{uoMgW{n/vEXY%rO&"J,-Sa/݊ o)].pK@X^-a8zGmW9X~T`w]~A ttǚ>]l͢CZީ#ׁhPLĘ+Vفerp⁾G7nC<'])\kUi(rvjܡ=w}2>&ޜX,P*)]ز:ţuƚl"Ⱦ"4E .cC}穨H9($F6b w: P. 
M /_S 06]jLq4Pl6xVo\F4oK2u=I%lAQ|D}0 eq@mtRsu5%/F!+!]mc몹BvHjN3cMCm[qn%7_YEKam ŜΓZW{)+}NA$4TEg}MR!>Y%|.%ՔwHxsely^"hM=ٳfN)AqAU 7V;DohQQm v'qֆ4:qf =;aJ8!)W % vc\3eDewuq)1E3ޚ=vt'Y0cgmO>gPZyc.IFM5gCIC^bb}Cn K)jzTQoϔfA>() "48$-{4J|a3S% XUjQ}k1iNf= +֧gM8Ϙ3?]WlO>Z%YCX-eFj6cb?:=t]unЇN8<~,^txUnpŴԮڮepMC fgX@L6D'AK$"4t=0g7P$`אǡrۘ-\TS2v<ɭ^h+H?w|?rPS)4njucdt;딍}=վX-z Xg#6ړE%m+^ Ik4&41ĘF _{hP.')H9OPkIf0c7lm5ݦȯ2'jJscNZ)u۲ǻWI#Nt!4Z1d9Kn2nU q km5ۃK6d-;bQDiBc]hRWN4+~ r%hգSpu l:Q$\冐Z76d-G1EեYhFjg&؝E5W Pn/sWnp)XLˇ}CG`(%Fz[c:ZѸEMNcՈRGڥP:#J}(R 5S)gÞ:oD/ ϼh}MGh7ڙ*l@3BjI1NOX4B=vD1B' E؞)aY)ؠڍ\}^jz3l|7kN05ke$+{(&}9ۄ-^MOlpW~?r\r\r\rܞ9A<;$,Պ*P(b)e1EQtR!/(F5{}g,8^oGT\CN~y'F Miџ왳yЯ'i7,wh< C9.h,ږ"`zFiYhJ~ iUwf ^P!c G_Bβ0١0KWs˫# aɋushޅ)yZ6:OY%['?20vn)||^6a?W3dVޥs^{f&% "7HpxcBSf kmH৙ٴdc' az(9v߷KҺ%%2A{qNuwz/;C1B%{O)7XkZ)TT_z蓔IwDזx*:' 4s/MbHe{]I#EεPU{Nz(jlf 8cŨm'`j(ZCo@Z:vkk=0}~w'^A]!5ӎ)ůe&:Qr1*Q)ᵑXeWz\0< ,)5vKXCk{}C'uBQ*RP:iY⧕.V؆o@;NQRbX*1 J}Zt;j?%o\<1JwR]u K}L&Ź\g| i"D&R1ž/5S`UhZi2H\hתDe TrDըT+YyOUʷpfè.DeQk7]ZWZl"F|<<g'5LD4\$(Xe\vc\Q,=3OIeY2KF Z n%S?1h4ڸ] .\l!ڟ ^6zOף9ҏ&GWc nng_1 hMGPs&/{Cͻ} nc9od W^0u"\c,7et~}Fo ߑ񯽓hIHFR.6N*`/W5ioQ*^:?E_y^{z*ɲ*:A\$q 9pBuf]8TJKh3Q8(WF҃%<PF/1yAst4kLhIdjIFM3-^f WE.ӥ)}|Q Ko=b4y\E |VG#>voJK@8[g` KK75WWi̟=~jjnORs# %Q*Q!ܱi?嬹}&h8s'b?~47=g=e߸s1xs'SEvuWwa򑝼NM\] |iKG=<;NŐߓO<3ifY%pgȁ1-9 inh$;sM=> e߭R.@}`%UQ\xRST'b'C@"S)9HIt$Ή![#d ʒ5!.i]y,SdW&I4a UEi噕g?r8E~gG7DUi+9:4rFMVpn|-D-{фnV?*qƠh BJ^Va^{6UESVZ#1Pm#C5*f&W蒔siadtr#I\+oI 7 Co߃r8C6}ֺ5U>P[kFGf7ӀP;r3x|=*fJwjј ֺҐ5X!2MptS<4tE8r/Gar]9lFFw~@u~PQS~rxQ-MLJW5mXDTL3\'̄5cQ,x8Fb$p)X),w87坏%ּOJʏ)Ū 5-(c9oj+&WXTeH^_;+U} [|6ՂokUYE֌϶2>VY-YMlӃ}e7TF)wJBe1a瘗gv#&n4 d-^Uߗ#3Dzdf1kQ*ieLCBP̬7D9>F1Ea2̀L:&[B 4% $JPb(;dz', ,=x98S x!6n|{5j{MQŎi*mA'{&ueoW@Jݶ+x'X>Xy+)0hfow]亮U6a `9POQRnKABI F͵[ TWpQpA=1՚C#R+{\}v} ]nn(@ՙRZ+ī׾-|0|c{yscx}; \nwB)ͬ?=C#pĭ4e\Mײ/PhMD.. 
"-Kֆ |Țl7M>Rʱ~-"k9q>@cnzLvt.˼c~w5NbxݜGtM}IG<$a<~*j~1'7 F0SGTFW7OI#|w8*hhO?|6HQgE5fO~^ :f~7,'/ϐ-+wrZ%SMyH9eՐQr: sҊ%KBRTfaQ6,DȀ!Gi,1P~(14೗'0N V^B;%ءW)*S2*A*\ S 6  +h)]ZSGZ `JEpKڰz`)I8iǸKSsL]PdVъy]J+i1ּ(;|q/;K<M,F.`6DjQSJ<*ƥϝV:)hLP YaRĭ i*ko3B8>.}ݾDGzA@P G(mFy"̈́\oL:n$-03ᚈI6䡂%3ʅk{+q2eQrw5.)GaR {:3M/Q'=dFnjpOu,ۻjwJ T0sCุDm ˜YS:d.{T)Y%[Qd@SlxVp#I%&A PKD̹ b#(IFRHZ.*R<;vE(F9s\c>-C'=.4nʟ>uO~B '{:keږjE}IY+0ɹE*Q*\ٖ3DH{\q-!PrsgLbRjnn=Ifx궺p @@ Dwꎈ[a *+nUBeTf' fd5=H;14FS$QEgly c %Gv>N]~F[F9ps#Tw?%LmJ[FC)̯n }YQn|d'bbpk>puMIS)˞P.(l,˷) nn,ܢ)Y)K,&ҬR,Fk Ym963Fu,|mpBZ~ZR= ^kzZgl/sOo)H~cp.ó˻aD ąO_Ef6Sɼ<-p5ƣXcFg62Np9mjTLjh.JoP,@ـK R7g_hD`BCF|Va=.E⥔D dIbA)W,=X[s)АtwLQV{ᡓZ֗<<,>SqR'Z+o&D!GZkqVb?S|? Cn &H[\4Aƍvm&i%ٖ+YQX% g8¬=ҋL"+p%8DqD& QSbR#€M5)uzXD׶B;d0H0 qqLdEDN@OlO42bRP\4HL&$FAI)7-˜2KqHh') sZ+ 1a$b)atp=PO5ta.%J \gǾwbŜ~3l"j>*5؏nAol ^d{zggkDͮ$/^?/5gx,Rv?ms>?-l4xs<;8;CWgƓ =yBDJ ziLKcqK-0Y,P\H w7` 9ߏ,@ 'PAT1e͹OR,ML.oDh!ۖb8` 1V`Ɖ@yѽhhp^(wIݝ;4{#5{sC8 : X!Zqa`aTlx`D1<#\0l2ISkab$aD]Jý&jօ\8a "0 ,XTTڀT`+fjL0:PSi]K\ C2ǃe`#[NTi*uh R)/4JC4Pe݋[Zƥ6U)a+RTI$^Uz8`୭G̀\}&HV"يZysKg[l̪-XL~n*Rs e+ʤ6:V@8ָׅO{|elo48;18;;HpR*Pr0R9JAQ|*lJv[ǩBzI)J=&X8wmmVq(K$O0V ?&;BVfi];2A !bfJ0ob-+@r:f` L_wC/ϻ}zvnlJr?#&PR"ۨ?fXٕ`0IRJGKa);srgJq'r9{%Y4jՒ&1?QD[ K;YxѼMK~*O^g>X-XzT@e-<ɐI(NxN%v)#ϑNsvq1;`TiکC T4T: $GcjFk|W8R7 N N9@E*ucT10z/M0!X0&X8EV|wVf44cLD7T _nF B@)ЗEsxF̓ .5%PR8%5,"wWlCc(7P#\7(M0 kH.Yi x cph2c aV[8gb+cr@r8]ȤY2k[=)׈2RP|+jk!jȟǓ3gb}(БΧ ].*BbƘR1 4D$HrEԠJH>S"VGaJOU -OGZ*i3Qzf-BBh:(ɑ{ƶԜ$~,ysLڬw9W]@5lOޟGLit}溦w6IgW3d#B8fɢ'wL<͈]"0M9bNq x%D|#uv=v0{bk'9}J S~cB\Q^:j82N/zĵIw4h&ΙgkJ UpYY^Ş|/gi`Ka8j6Bwxß|P@滪ɡ8j*IA;w6%xZlt+$q-aCJCbP)i>Y˳emZ?gm61.zϦ u"0j jJSAVVh9{@LS6v !TBUZ| KSa #=x֥r~dVk5׽ h0.HA2 `* ȺЀv64gY9I~|WHUJdh*5_ݡ ˃fY7_a5!`/Pt[GVz;.*|q?VŦ*NWjioYz: KסBr;Kd[lut-~C>DFo}t26rP+EJݕءt139' ޗv\ qdo7CBjQqKMptEpt 1QyjSVHʨX!#(Ħ&ViijZZɤ*`d2V9JW[NAhE`ds%J"ZkRGxJ:EX 3N#OO#y!-FI`TA1|"ᕤ`!N T XZ2chlѫ"S'zsgQxqlۏ7c<_+/E5]ij OBv ˭)uE咬D{}LAՀ,FDY1ru gRQhKRR.#0;Kſt#R->_ ;$'15kۂWTfn_l`Tm0^SIvD%Q_kl5m̎QO:mXR!P yaäqwOn{R,yk`'t| 
YAj]p\ƥMThʿ׃[?&>8YS1'\-vvo5.EEεO@o~}NWki B9[C9YQ?7:`1݇@ɐ{νW];撽|8&ݧ \<;*赢M-Qq;qJ qL䍧o*XU:\ ^leT]Hi܌!} @lw+଼}=ױ|B܏+R?@3K\Jհe 7[=ÍVj+%Vj`'?M*v7yv˴3%Vu8Zq DzN*?~x4r.50D)w3dk3 lpFL;(LyJhP*jTDwZЫF؋o8|2a4\0aKxƥb^iIB !B9e@XY!&Xo/I(clFRUf%a%`r6{s`*rDb%/,U5k!x?gnV+Xq a3y+`ڹ)%Az?Qw]>-Uec ®DN+;e\C9p[OS#]*UlRADƼF@|#C%X{M_Ԩ~ӗW'H(.i2I%#\x#nRm;4#rcaۻsi"_Z`/Z|˷e}wh=ix S/)˷_5~܁=;נz M*:~ja;„PLޢJ ֒f0v4蔢8l v2[PߑKcW)ka~2Ҟ>UPlj4Buc{wtM,5NxtM~0G$ZWKigLi !Y¯A)1L+8AyIU\mӉh 'H ؒt|9=8$VW`1~q,!*%PRRB?8*(3`L|E40L3h^~ʴk鳲ǂx g=KYÜGr2<{y~o-c|h.%tNn/[HSfIڗs;oG"٪vX$ g?̦o4s'L@3o:ҋp:2ΗI׏/#x>OHHgEUdwӳY;2;t7\pp4}f0@>`'}xٿȜ[Mwc۫uw2 nvzEtKpEnBgႱA[wFqv0DC=Ż*Csô/COo|& {P:cXA\OWB-SQ\m|\r-;x>b:3fSgvvԫi?7יwZ1>(C zky OE.?, -Cs󇋟`ߟzFfԽQ|]>/|T7_8V=H~cF)B1g,3zϳWc]5 _&; '4<9-p|t=}x7,~ @q/W3\H_W5b@/W~& .W|~-&sf9fkȒ*%VhC:[DOWc L)ƍK!* kLi3nlu/&g0L`.Ou"Uɗ;cdjao\?uC8*&0ϣ6g{ُϗ&c8|LgWeT %`谁[joE)Et"+x".`1bH7!_ e8慭 zr^X-#%_l@d̖ٲ"Nfl9-I/%QҒ$U$,:Q<Iʰy-&aUw|\fKX'd`m^_3K`aYA%L#C4 +%weqHzfG ,;ݘ>eO[ӲJTVYuHnC,&E0v,ZKb.م_Fk]T՗KtɃ.Y#]K% ެ%hK?'ML0*Ehr/MDcAK3%K6#mڼUf^؇}a2_b@TNV=:XWjϭ/xin:XWOB͵_>}^ B1D]1亂t'@2Dit9=Ӫk@biq\,5j8q<??ǃx1sB@*+VdJB?c朹؅YڄXHeG{:! *铯WVT4U2]6̑J@Jr8 P R2ecWgls=^a'B+!wzC}}[/B7c.4V!01QFEG"Z40&?iRߵQ/)eKMX=TQi vIM2[_ˋX+dʚ2>TS}{7k90wۻOLAJ~8T4O¼j`S~ L<}7bT#^9s7w8~bߧ̭&f%>|"?\Ӂ\w8.@vd0LX7?};9]kU9km:gM*-':5.;i f4ť`z)}WO[h@w?~nmCӜ/~k@*XB9 |Z7SnlڻaĮ O`A*mk\ѰwOdZ}4i;/[$bYeҴ5輸F.#=D\ <ٻkd3e6l|5qp\Jo6Қ Dh$?OnA %>Z/<삎nw%sX(st /x?1)0>:gwReSJĠYDR3wV:ADPVh[- (ja}>nȽ T jeN'(^ ^˜ 1%nomw.*:iD=RnCt.~FO,Rmכ0 ! [Q\䙺%¢'(T;; B+؂QL$hdAt69K$ʠ⚯ 4|U'-:6Ô[˺vHZ$HrH2 O1c"&Ak-%EqIEچD:ؒC/EFb<1$Io+٦3hb$#@'Pi섋ҥd{g _pVvgG iKO0{|gU$IE[KhDw c:EHg![c+1xǺ(]>ٰQ=N6xKF] Ra|ڱN~H/˝+q(aMjM)?|軳,YM;'m h4XE퀘#;0=y $~]X}Tf[oC;l?v <RirFު,C4/xl @h=.nmݍ9hcȳ'0 4wڨr@ Öq^ƨd:ae؇[},pޝ-I*LY;캵nW+^n Q|$h Iΰn=`m_!b{d_ؒԧ-ڒ}hٿ{&: ]mA?s;j evYGo I zBbVՍ{4- r[|׮%$烦oZgo%߸+puqqS7 W7fL0Dg鬹sWK檲 cpkPnUF {˘]ۧgיKϋ"Q䳻Hn1>CX" _ }*Lo)_h]$m~FΗ<@} ~bEt[L#IK3'aL a}ZclG&DQ7ι{JuĂ{+Wܺs4o+rb /,&>N/*U! 
#OA$,4τ@oK~2:ESv![3*y2iWWӂ%l1> }ѫp>*!W6nV-|G7Su98kZry5]q%1m(z 7u#~rL*.9~ZR֪xur%x5玱,0Fc٨DJ}V* :ǭffp .τy)˹ 3&W%?=!NM)BǁQZp"Q^ qS*OQZ6&bVTGR:fYs*e39/Gn_ߞLo%Xe,ꮯ|[΄ѽ〈iN}p޾ۻNm+*=$ ƛ%4Ԣ|_Hx/00f{nS>bni{h^svi[ܞw} ҺW'' L]עz 'Vkpuvi(Z\}os҅{Ƹw.kx= ZƠm_fZj|#xw1h@n:L~]2@~,4A5  _qL45ɍ'5]SfOo˧1VC 6?XIs[b{;ؑSNS3S8J g+FAAغ1@PfcA 1TT|XRPe~BN03YLV ι=&L1\3!BV{*۪Z V~lz@T `0`F"Ҁ/UP]#(J+ A`̻,xddZ_2xxXBz)#q7EE6ݞ i٘_[l ' Esutrx/CblcM'$-*/s/Y aq3B/ [bxbsƕfQet3 i\X$B+B$0tL!\i1YR8b~$%#AXe)?wqŃ\[new?=xYp7LW X1yͷ?|Oa’01MR}w``p̜I9똀1G7s*-̺uLZfYak9^>*7dM{/v;ߞiY{8?9jlh/c^U<^Gt\\hj:3j#?/0P}wWt~Gp5~Cir2|,:t"i_EVtA'^mwq:I ,FETj ҍ>tA'^mVz{mB;K8+gOurAp!1k1`BM(I-B49s̋U}P4r2 K1!jbZ`K") $9CE҂3 L@Gښp?F%'&$cbuIFGSk7rIhBqIȤot!RAj8Q퍊9}spOmP!c4>r`6,"љs[斤&΀6dbY1ȃ`Ԩ.y`$.O-v4a& Q4q]kbv?RV^`rDҎf[h=Lda4[;v٘@/ܬZ&euɉZYJ 0 yΓq$#e!3Eγ`h4 K}-r=K%kAfg*SږV>SC (`@E)Wk[Xz&1FÁ,m[>R9ϝJ+xKD*ՈI-]6&nC'*:0`yn,-~iuth0oΒ+!bˏ <{-%^|vW'gd$)tdO[YvX-U~ g}wV*WW,]W’ƥ"_>?'yhVu.AAZh 2qmB h)48OɅhlaU^xSBaf`+(/G2ElS d:۔ `5ME}c0TgetrFua%H16g4(_Bh4CkQ?P?͢R)IJd.d`^gO:8G";@[݉DgE^3C#n2f_3hayX z2Q1bZh$В](T3Vzܓ~ 1闆 7r[Ic^Xl/!H9OgH8c{I&@ն[ݭ&ExL#JVg:P=z55UаĖWw7%{p$%U׳wMx}SF"KG(ESSaT+eU+[˅aB~6`Rf lk`؂ykSp&J{j%5rF\qHT c D}`\9l8s,"^RX^`mjiT6O\#$3MB55|$5Hڂ[CE er_h%,Rmu>MZ*!^߮ ܕ2Xmǻ#{a!O]ve䳼kth*H-8x\-ho>\]a"/} %u}s6OV; Amavcu~~#dթU&* o},5)hq`;UPbN‡`Xr'bơ֛`ebioۊ%w~cP7UmV5 |(/qXVPx6F6PXYGj Y^t%ݛFQrxfꩌ͎߯onP^b%zf+NA`,0l>*&x"azVʼns/x]&NNwBQW<(%*v!=!C =]b(bX9bw'r:w vzjw/v?sxz[>UOyrla[n.c?MBq!n@XλJ<* FhXTX޹2p8SD̶ԡ!b2AD&pج\GBV'9kJQD(7ћ$`4읫(IϾȒ({ = l7Y%=!4j̜~tc)pP98&؎?@Anm{ڶ=;]s<=~IW:ZE8vBuIz|(b hhv2€d@\j_dtjb;SL ^JCP5^[th@,"09JJ#Q~H "#2@_^`oC5r9 QF39b,Uy]׵Uq Wo(>y ե\#>iM_G& I}E0 ܵ +: 4x{o×aL"پW ,c D_:i:knkijS1qBJz4DK.R\SB7E8AAztpŔ8YEN@9z'$R3k$_Bi482ٱl)4i3}Z 'VQ h_?P5:N*yd:LpeL"jφW SuµfUGx ĊxG7+@4sB&Ksbvן+{s罋9 {\d}bƼSNrডy4isS2 Ч^8>"ɂhvt((1R(ZSّnNbKo9vDj PEesX*52LmJ&^_B䥖}S3ح`ֿwF;N E߶hI$A-+EW& s1-ԈFpLEsGgo HPҝ`,)&l qs:|8^Z(<=|F 06ϣHI"h`e'&JviW kec1$4EPB#|8oŔ#O-A7M G̨'2TCorϡa\S%B9APFkPS{_s7[G@1 ,LxT26j 2EԔZFY$1s9:5BaqC&x42jƵuyo39oe3eu9Fҭl9L4QZ"ACr)Z7Q+#ź0 
Ɯd3kА\Et{MR-W1;Fus!ig1uf4ֺ!_)L$DzXI cθ#c<'ǼaBzȅX^/QHTpOVN2ji/wp hWݹp]RatqBb&?a.^q[0[n枼:AJ#0+N\nONE3拺itXaςE&Ʉ@fXΖ@/'9^"b>8Q32gM@g)I+G̷;s6P;ުƻQ>2ƒEТFiJ6f&MpGNqΧ LKkW9?8Fh :̓cpm#T^דp0֒Q0`7u}[f&< h (-KCL= pfECII֫GT 8"7nz$z\4 W |Ep_lTKF࢖ju[p01C-A2c;Ԓ1 ː?t|lm\:LwhTo=b? i>pR?p?ji!cwӵO Rxoi߸K6tqs4{g燻CoE ICwfJ'f aS'Cae%VvjܶLjt҇f8.D3-/2Qt4?29HB(v=8b:wG3ٿKKocGuSm0?Yu,utb#בB]*8OHUq-u\&%?o1Px}oț, -šUC۽Ow{KyHkBL{a_ڧ4ؔ !>{#Rw 0)+s6J D6Q$;AX Dh2^զDMQB[rWlE6Y 6[yԆ!CL|!H?e'#ނTi ,޸O5˨+D1RjM̸`j/Vk";51|*:X0BQ;ٕ3K:fcTZNͰI&ߌypJ\mF<kJK6-T8rJ3I4Sp29TyZQ t&H6^Sw+x3 g3ء )?N񺸓SbpxfalmhcX7F/|?l*|S|o 1Ɗ ),*f7r)EԻJy6f0s^j/GC⻟~٭/jX~lB$ +ʽ~X>>9-  Ia!Q  Il5>$ݵq$Y聄!p)1 $c.kLH4s1!-m/,P=ʃ2X8QgH0,@b֓Qf A͹2S"bɈWHٟc2EףB1c; |,XǵEm?&3p9QdUCP0&Ǐcp ;q@sY2[`Zm$d}.Acq3 Fq3MfJ̫>%6m˄ygpå{Mاw߅[1\l;\5:9hp]K͏}4mpV@LIcI:o\}ބ\ a=l|Co`hml']qȣn|XHWuP3<K=t}~uhrl9WF:@eq(stt!Fՠdpg#:M5xu0μmh0e_Dw?1^-~:;dm_ -ao7yIE}ε~fThO% ) 8MluLɩg7XcZ1O'.޿ljm[>ܶumú 0D47~HJd9LRRlf\W<9dW sa+W/hwfqڐBݱEH؁\5ٝU:EܹIڤ'd%W;N훗Dሾa.v'pߤ( b 2.,V*L"+u[6\}R6y04faY"cDCZF^XUBl)`EȑF 06%7Q*ҐV)Ku%6r ɦ0%5U[fCVKҼrDj5)CdMRB/'+5e_$RV!4J_Qnyݓŏav.a 5avQ>H5RM>^O]h֫[R2)ǨE6A[_'R{ebK]~^veXT) d+HPr_@{|h*V=ilBz45+% `ޯ` Bj1'd1'+(&꽙u6,+jRB KFh5$5V'5m'[w3 V@O^oA>ѧ=ލL";[.2Y|춠Q=蜜Hj-eG44s?`2n8Zc͈LV1EUaIXcB#) ş(OX "$ pQg1K0N)Y[Rur_;Xx}Z^l4ya|8>!Ø*`w2QJˁ>qdȟ=0ixt?t)wnhn9 zrc&M)q3k$a1_JJ8L+.k{E'3rR16J[DbĈeܴ7=qXYrX&MKL] 6vo|QR&ʜVΟM> Ek(aߤV+|5`ûy`^n>q GUjz*\ӕ;ql+7+$֚3ǚ"q o/])!ak1h~WsIgh^w\XR`3&MC^ƊT2x W# 5vjNO_r&L[ A5B@ a0&rq(]8 P,#DJR3s^u{`Ќ> 'ė8yh7ͺ-c Dy$ J0#t\7ɢn1}?tJQiLGo*'u }_=&K6Krz6Oy\ Zi4O- UH0G gBSLӘĚ$KtRTд܋V㘑O*/1/RIp+AxIhǜ$Liǵ. Y%At=a Do %)"86SsG5i,q/Tg\i'"HPE@eu}seӎZo:x{=^vGfbqe6~&UoRG D\+ JN\P. 
A`\($61XLO7|Oid/QrbƉ]l(Ӱwۏ&`7VrZV|ia0['-9X'"B軳rvzо|8Qmi5ѼRR8v"-7y4t?BNn.}y/W/O«D4{7/n~}77પreֶ{o+eQՊ߼˩2eRFSϣH9UF_2ThoN>/j:qM_e_I6NYSY#yB0$Q7x~f.D ^O:MYKWk?~v[40m?|lg}nkw*N]u;ߛJ/$8L?JA5A`.dtZ^ΊY [6I%/W(EXʣJ+J+_+5W ҝeS X2g(6h 'x 'dUVfʑ/kwg =K ÂS@z.뱸|{9ZώE7[7opgeHpwyξ gߩ !:YZ,0Ō;y͘}k2{oE.gv֗V'g[+ KJ7*d!tYy?Ms?`y|oPȤކ=b7?[q`+J[bڦуx"ngN/GV䷷g?޿f?_.yR7r}۷7fyh<+uMtEh:ћ"%e'Gg b|!N_GؼsWwvaw6,vg7;{tN? n>|%Gw@^uG[Pa "=D YW T#J/~64s;ȧt0>QFoUwtF]7;zpQj={ㆃ7yyGfsTF$fcL-/|r6۽Y6rŔo%jh< wM~ (={3 z~4e8T5/Vn@wV(FV'0إG1tW1;u}PO˦(bڜ4``Orގ2jK%/ fG2j>qU>nfy`t.{TٝP5\ $v%X{"vVV\[՛qEMV ?Dec-vUW6Bˇ^Bho;i+kԣ_`!wy5tI.9|g\ K]z>.䶎wӻ7+֨%x63P[vvskXh-1u`Qr&EE󽘆-k_lV^`e)<R;cl}U ۋ5[V[k"jQNg).1$I$JF8X fRKk0؁HЉÉ(v&^w/*Rr_}ې0 $HiXdžE6(ʄTϹ3 5C P9`5C TiP 5k fEB6peż5b—*n%S+L\0߽)^--pJǦ`EsY{$Zz<ь*mT6&ґ2,rfȄlB$Mі2#1LމN@lEޝeR):M $ҥpO,6 SJk#I1@_%a.|~*p>_6"Dp>B~ Bط U=n.ք(U.M,[ !@i&6f1Ǝ'[TaRQvB"#@56NVX3""9X\qa%1XU#WѼiX0R%!ac˼#S;'MC!Kx?ӟoލD&E>O2Q=h  \hGCV)c{ލÂ[Ul:Uƻ?Lyi״wLޭ y*ZI<5}K!nyX֙^Ʊ[UB7n5h3WJ:%wTvȃ&Ɗ}l[V&tcV`}lw#xs S16J[DbĈ;Cm4F(`P\KmO !Ca"5q)q&SRd/`A{'anU;E[r^A+ϲYr6][%j8F`CZQhۈR!C4fј s,XE bu.rEϫѼgUrƧc&.WJfF94 \$ )T'`0_PYńE̸ >NQ2OǍ#IvvL*#4Y[ g_nydKj7l"YYYY-s`ȸ2"z#N^irWN5&^eQWN]6))NT^˩w1)tGɽ}3ro7zM=V@StB+COO&g CG$PBܔWnn븩rSf6qS;r? R~,^L &I"HC, K$44w/#!HG. ;g ]٪\FHi]ݐ 6QӐVUF2FPFyË!9EUs0#f6 Pa*׊ ? P~zg򆕲ڠJs. Q*)AeXf(vYSF%sL"0B [Yg> B'Ot"_(!M#MUS4S2ؤJ PTmQaa 4ltjDaiLEJSteߐu86(#يV9U]0"T& 'K?ys#GW>t"JWFVE\p'<2)v`oC[?/׶28c55d? 5զSb'R,;NqV ([tbb0̼ (ci~OAO&ߝo\=FM=+MQc'LnhZ+qKG&|WLt)t-cSq4lj(Gch rZiJc"% }ٙh4) yBg4JPp!Crsmvh_eZz0/L)[{D[Vz>LQP#2L^MNQ 7hǷ>0u[ X}AvpE͇mҬi9 a.' 
[B ;Btrj(v'7WyQ; P_;B0qIuބ79T{nZ"1ޞ>n Ɂ#B}u̅n=Fwe"ZzqhC#A]\Iln,F%w ,DB*uyB9NNyxe']UKVWxJVugˑvO㱐'0ɖ5(Q-d^G=Jug1"2(7eYOVYCfܱB:-Np2γ1d[#1l$%r@dIݦƏY_@"~NgVS;>oO_T%6Ae'z5&f$k}PRIwS=/:u )}Ӎ|C5}8NfH=iM8=9>ho^ߎ5LC%厭bYf#n=lGv'_9byIEh_7yN<_X8Fc{ʉ;)Y|]ap$|7$ tȘnVɠf\T9oˉ2mNn?/% S Ɋzh b,ВvRNKNǷ%+C;,,ٿ?酣9;W/8Ūl i{:DP֋IXһhC;\i;HJNaieD#dU^[SWQgxP%,8=!w6LF4'KT36 vgIT6@ʃtyam678TFmR:|f{Ѓ{UMziCk 3GKXt'd5ֽϩ)Q"B=t4-zi%mOd Z3-?`@otw؆MrV4ͳgHC#(m tXd͇]́,0F@_=%"";\0}9;ߥ=bB|l:뭽^a3W*TUrt Jr6&1[,б߆Cb=uݛ7,)@إJ VfV@jƍ?/\si' ܬ_o2!N/y.=H twh*qzt !\=C_0{KW/m  tO#FY,T[OjI3iNV]P-}|Ss;>%g{P8JlV{􌃖ano3cS0w$-ӷ~M=]%b=cFH`ɇV {>XGpE\x]~@@جvh.LC>|s؆c,ž6r}ъ`%? \Y?7G J:y Lwru@l5mv*f(7!N1^;O?}sPs/h.WN(NbjBGan8eZI=<|ZIhM>ZeE"'NrBљuOu(W97U!0U\3 Ieprz2A 9N^,l[lk:EeoDs$iCr4CvB>%f7#*T~o'җ.S>ڰY|\73L.?`ю=˹Bys(>=b!4k4Z>pP+atKh09=~#z$RCfMDž{C5y\2ak%'; -×n*8?:l# +ثhdQ$`{Wcip)V:eڞ `5":PBZ;U>%K"ξ޲n6PVU.VȌp;=OƸ8c//ɭhih\si4CBF9>>eUO59]LcVNsIf@fusk\]iֽsF0c̔$_{+δbu#F*I`W/]}t %*nL-aGeݽZwNw [ӍM5V%CC0"Ty+j aB=66w ;aa^NNskth:Yd1,|qon`@m^oQ=.x/5Gx>YX^WtD(RSz- 8Ѝn<C9RS%5ޜ L0wwȠJ{0S@i5]K!|>=7٧o"ʺjUIy6]ɼٍ.hNSAZ2-h$0I2!T\X@_%) Dfא܍wzt\1)u߷+"sJL[ %p&;8gZ&+ZyZ^߾oJi~BH過ʔsv?3K?eEQTA8FѿO7wߤ2}/_mRaط~ݾ>)޲*틿.(rH`Yț?epdRv°熐Dg~xً;,#lYR$fzۿRdBh;鎏_l;2k#(Uh̠ϻMiuz' 뉨*IeVo@x@׿~;9-!r=ZIT7^_ )))pzSM(~kBG\?V[0C(YnxXHC}dσúKCtͧӇ\/n>X lPY5IҦa~>â$FYKd>t^*KZmf'h>{<)MWbp;tsDB:.tH˜زS_k{"f4=L@%ΰX瓪}2{`[`s }<ΖKnOCtMWd0鯐dG/"i;o[+Zz fSnO쭌nOIRQQudރcgK4{\!Z>?{WܶWogKNn4;5Jrd}RHLqy9+onDEK H(+=Fy Uz'rb# NfHQ|b-@,up:!D o<rdg:Uwjm q_ynL!)bbqCnD2B4L ?W@iXww䦩 _zuxBl(20QC^䓥,sf) ySRSJ*`݀ޫXt{fbSϗN#$>E]/(zi {wYĜ !5'zkA`Cr=["]SCm )_&}"_̧yPE!<W/-M`uo@VlJ?yÎ/hy8Q􉞘f57\D*!SbjN U%h{QFWWW%S(DQ%a $Ae>uI$V[T+0cRI\  :H2.BEy_IC%}ɩڶŻnSjJI^)(ظ )H {49H>>s{Ki;ޖY}Y%Yf]KNjyz+g]U,]^ƿ~jD*h!- nk}$6NTN~kC Ixn>[ C8+K!0 A|ټtoڶ1p5 .$"RIg!ѱ>[,h8q]eyh@)R@0GtATl6hBnB)%o$O_FAoHҖT"پ-MV!M I{1z8UlnYj |/ck(/@)j>:8b}XW K{ecy%Lne\} Z.>365?.&9dib$1ɏXf(HT&j2ן'!禽VO,ťKwzMw?l)o^|-K6F1LrwO^LׯMp 1dڕBIitA pXٗlʥz6M5ɔIL72C2WKzr=qElUb,gdX:6pj?'A쨗еBr+tafw̫hPgݦ61QZi7;4ohVidU^|^ͧ^W.&~>Q\WGm%{qZG6n:6cGw~O!y?ܼrzI's 
p4]4[$zc4&41tѠ׎ѨCzwл36 䣇qn$Z6_ M#u*Y|zLiX'{XA'ncMga~w9a>omNW|ޠ ٣yBtszF ϶SbB tSZWA PD`붠Wک?J+hޏZ~@ x;x̋SP>k t M 灼ؒÒV5녯gZLJb1'yGwgo(Om94=#o& PJarKH# d*R,E*}rH`,Nb1HIfekJ(L(Sa.Jљ O мt $?բA맣уCR)rYm9JY(w<Ɍ?mOn7jǝ gsrb;'DL/` 6yv~3UPA%dDS!5J :K#cSy>RU⩻};!5CN!8/~,bV% ?pO=yT'4sqyqy ylnHyjDwLJLXC38ܼPƉ3EtοLyO^fN``%QMX$8`:Z%Lq , fHLLBd7$4I\2 $MqD=Te 5xR'5R^5 bXL&ϟ;WxXAgHȈ""5b"b~c&fH$#3n3q*bi@8AXDZ`Pٖr aiR(fTv]q\p 0-_p Y^M.P[)PE4#Lh~\T$&fM~M+9nlsDkh> lc,}{|]N^}N[w{RC``$`zb;XWxNK Hqzv¥Kįt(XƠR qdfEE)&΍FֹQCqBQ{RFSR\q&:e/h${=q:cPΧ b i[DJKG9irku2D ml/QNtz _Ù|+ 73W;bj}3 [[GoC{f׏3^g ;jp#Day鈍*X{0PQޜuS֑S(ypcѾemp<BGшcӃY/sn3WRچ9(VJaaA2_8F`'Gk)LJKU5BUxxv pC:soo∻վ2QDFukZ:|:=% 0hrnݚk|J/yLlNOc,Zmf;mKlsay87%*"Bk^N/_U>(%2JN#""T&)qF. Et'zC+}Xro[xg^n00<%aՉoF+,]: |wO}dwX5~Aw2xp;}3&$i|H#d֦l\2Qn )KSrֻjΧӃX/HsAQ^WÅ8`켚 ٱ}켿+Tpf@BNA; IiYH4C"^A_aB8R@t`g݀J*|wt %N(hW5fْ#B%8Z;͟l V\rX̕)K91eEHE+^> ‘~£rS|qnլ[%0ߩ5q7&)YKE$8RqGT)T&&uvV)qi6dIE/X{w/)pHY3KEO00jR'&_*d(yK=khEȀ4sH,@'`<n3>y(ހeqU3(]A_<p;8,S@p~!e<l{`%B08h 902vl:u"`^.pD2p D0Ў"rg~"*杂q8)*|K*Y9b[GB+/\:*Hmi553FOL3ԧXi,݉dwk{KP:…osկ1h]vXpxZ~]1x kY26|*(S=A ܜ OaDP zkbzFD ^h3 6a A@qU]U5?I)V`vQ7=iiI6 );>ZNG=9OAy8,ͨI0cHf_z .Mo7[cmrHQr@0(1^ '|N^}Nh=.-`)%ȿX~)c)fM"tPq }9Sii(}Sd=|cOr%Y4^ wK-C%< 2jIǘӌG%ID(ELH2)87^:sieXw`.b(pFrBoI!GMd脞3 Mb8^W8*x^Pb\y A$uDA ĕw;z 8<=!t,z0s@Qp6:sPSDO/ _#[B/siEX Md@UDjH﷪IQ-.VuWo%5թsՍQWcvHI ~򫳶D%b.DeRxW/\gzÀ$򤮒 !Lێ h]w4yi`RTdzjp^,;"|{拈拈拈*770鼐H'=IR+T ㊄rB "7:9N'qJm9q܅N'yi̽]]&Sx::Sv0KfT軷gUh)+ vXq0Oai02 1N@Ĕ%֒ѡ_\sj\6{dML8#hc"K K1 EzPkd m# (v! !`hdUZQ8c N~&UHEJE't:[EHh^(s6Waϊr9zp>_5'oy+ׁ|q:aLlKzw.Mwۻn☫kTUrzuip!den'(iBG`L[Z)s,G\?y+u%{t7f(g'ۨGD9<8P@m%fi_H|AD(E{i!nyw4IхLԅ|%LWro:b:>7[,\g/ªA/Hn;fHPmn@".r 0L/tZ|,2oǀES 4X < V.8B >$t(ń40:J K=e@kc>ז9'#onfNĴI8&d09&D)͜!̢J!Qx )Ls`E`*eX P 8$SYIeۂ Gn`RV:!*Ѱ$`c$R, ^\/b);8 X*2* XjMEp̏AHQ7N(9+C`GspJ=Q!߽Ԉ 6j`fRHA B8 E /9&P(4B"mN"M-H8 K@$Z1DJQǟ L,H :NA`ƭA| ZĢ ! 
ppShB@8NJsB &4$Q+HpY,R Cw@B-Uְګ| tV6?x5E\dƭSL|,pHQ`neE )-(HȂgϯs4E)+|k>"ÕŹ{$w.0r[ jehޑE .>(!Zy$ZlIcݾuFPd}U7ˆ.u@i1GvjAS qLp9c bq|w#r'O\)7쬴):i6W{n̒d~dQN -_J!a6?r9|kMs5?BGKj)[a5]hiXC- ֐mvއݯ{Դ1ܡ)ȋl0B)P-U苑9oPQ鍓I989Eڔa=*a9a99F q' mvH,f8t5}BMW;bR@szM)?Rj,1vrbU9 ̭Kyt8F؄ii#cҟ3jp̄=h?(]退}V|clLt^C ܇`@0C!ʉ)o| Q$<5-rdH|2 Ĭ3R5k`AAEO`a5BKq2ԛ )$dzaesvW+ ƟpV6xs`͝OY$LFAQn,3Hc,D+N(cdeSJcujb%bX!QGGGJSe$_a &Lba -v`w)VE0q(,D&Dn9f{sn{І-A*w*h7$(WrdH|2bf{ H A 7P0 *6 *-a)d+Oʾ*'YF^3&mfZH<){j?d S }  ld vD,HB/9#FX"D_f@c)&]:t 3 2R>Wm幢ErLMюڕnח51R|dQQH\P$]+ZF{Cќ$#kZ&ryok<<ƘCT!$auF)f G(3r\#4 ⩌ڄ^2g;ˋB>; X/nJOZL{IwKxI "Eڪk+--QjFE)}{.K85tM)pޣF]_ ?9W)]u[6֭ pf` On%ԜjLYeO--@cH!QBP["gH=:RKs!*6W4wV c;Bk^NՉsԮR\km- d.#%Nլ DHú.%N Mt) pi:;P+yHD5ן|jzPrDr%48ePB`px-% M,8)p"F'0åPmj*Q3ukc6Ol!/9"a^-&*n2HNu[:G$umݲMnm C4S`9xu <)1sZ[:G"=n&n- C4S5LXќj$kX+x8Vbr4cBl*( *8#2&ˆTCΙgudkX@A6rJph8OTL@HGjJ%2H\Vpɰ+͞Z"B^8D0O}FQD+QǺ VvmݲMnm C4 Sܷn :kԱnFq)֛uf4պLU+Qלjn @ Nj-g,( gB&>2ZScͩ ҦeȖUXM4/긳VHm7+#b ()*gP-FVƅVF`'?{OunvA/5q!K2eTFҺjԩHҺ!)޴C'u1X\@'w:m8+TT&wmݲMnm C4 Sޣ߳n!ų\@'w:m(-Tl(TB^8D30(CD gԣ; ßh5\(J!5ϲHNe"$6Bͦz̡@XGO:7#G~M.+_=Zxxmn.*uʬ#$R-iY8ܪJFtJ^3TV]Y8>00K)B i0q@-R n2]E|`yx5g|vF;UAW*!U8i^+pkzgZ=G@}֏"R Ԏ-ni>cҞʔy2!ktYhg/XJَDj8"l\cU+ZyPJG\7f$-*hxU| `[Ye;8ۄe[yeDMfd>[3xV}^ T/vfڂcmlI D]զ9~qfΜ2ic0'Om[Y|@fQuU;Cx2hK=tc݀HjٟMO"d-bO"d-=YOvTf[0߼Q+ØzǕrf('3arƄ .0(l 0! |ʑ(CO}a$*8<(QJ#.dRO e7&$_|PAEK|}TuBW뵧g1[LLj:|պrIJt<8B# %we}<"3LuagP ?~X|nӇK$)BYOe5ܟ|I3E=gE|Xvx']&d~sޭnPͿ]PI)O*,[|NNo\ 6\+,r?i_O):D@g'ᷰ(pZ_̵ZΦ˫7ҷMk*/3k+w8a5E)$57&%5/>DImj5=x?q;VG**O=Ϝ!0Nby|@wJFp]B:izlx# s,κe8(sV|a*Q> ӥbe}/a`~ -P2*D * 2a","lHT鄶k`[Qђ?!cx$ث~:´zpGn.?^\~8\-}cI7^>xJ(!m6gLOl>|+66O4> |\gX털]\A"EEġm2/Aӂz;aӅNtYQ>1ǜ^6Ux~SLyDG 2ob<0 93Tk˔@A@n}ULBv2l?G @kDV>I"O"GK},?u҃a[:NVn& PZBO8 8ngR+]] q ER.v:,="y e3γK+Q=klD$6pXG,hTdtCq^!A<9GDzr(ʈ9|_G =;=\g]HE/e{gR .dߣ4Fzu,Ҩ&8P`Nf9JK8r9r9r9rTj?aA9)U P<y0VSV/!^XMPӅ.~<+ %Bm)H! 
m!Z- t/P5 &79 r[+֡%# r iP\g%5PȔK#s 0TSwﶔJ2QR^ Jowm)j'fp覆H'l)cT"L Wd>*<~(1yKEWOyAkdbo(=$Y㍐ y $B 5f1)P=JHCz AEX]0N-Q啬6}C2KмG}v$8Z4Nt%(* XNs_L5G5G5G5GU֬& ձJWBI&)8 R {OIa|ahQ(ʼ ^y}y^ 9atF RVjN oܸB8a$Ҫ..h'b~!y B9b1Bkwr&ƨ<>ԭJDg@,2^ ^x'cxYKE@2<Ю( G+|o )u4 )hl*ΚPGs# FvKfHvDaGۿy(/@:tu\3i MG$-(:3)0z# Հٴ E"!"왐0멹m(YC+T={8R"HE q/(FPdyޓ;g ګi^)o/UZA%@ -ހ,!N.NϗlT*ɈrL;8g̋ PJrWXxxYb:8- EdRbZ8 :p9e9Xr4;ƪ&hB2Rc9iDֵb  ƈ*mKIJrOMSôT5j5iWL(8KԐk^,:V))E|[bEsS:6D1S]"M=av;5\n.u"iܱҚXh]P`ze*{ՎviVzXϥ8h&h4QAn-jOxW )5@9TmB%j=DRB20ňCHJel0n "P`6I;34Wb:@*7Ztry^% 71'[K[Xu@ˋt1(Qm\VoFy:APnUQc*.CI-vKUn_PGEkf;#J:Y!GED=?T2]40\P9"+(qN9Y~ :J$Gt}Neۣ8f:g積67] hxq+~`hC}qQH|1=+el~xwz? (ow(F?ipA\r DD1<^\:̿O+DI%&NT*:Cug>ް Z^nG+WA*W&3'q!HH!R5zx|cB /׿ͤrP2lص&:R G bbT0tse>$/hՙ %A\Y}eloxgtx{ I_V+9!2IM#>˅ Jr4Yn!r_FI3۳ۤoEp_ػ4YRz<ǓI~2A+η8Hci{f7IBAXs1㇙-&(>\,1bޘD] QAK_´edo^rb2oZ~2TTj F># joTX٪^ơ#zM㉢egӜzKD[g9qb8[F(sĽ!$gxO;۾ K"CnDgJ H :$DT|^Άц rO'F9M{Ӕ(Wt3l=7!U=! N~š~_)ʿB>up5Hȡ$\\sWѸNJ3&3%rn#ʥwqkh0CD dB dކtnjzv?5ҟŞjړ-*6D{h6,~^,>\^RZ}dpw'><4=~ yǮd@M+ޒ;,g'ʞ#Sϥt4c'5πEf,2A-JvGum\|.Is4-Pq*|tVp3)!Pz׏_9̐J\e+r2%sQ7gӫ3TV3鉠i]f^+Taˌwl-t C: 53C)ׅwuvu6&{)<]2("(4AZN%gJ'm!@ !^{D%6>rr4q hV2^P~KQh&$&E*ڙZ=١o^=0&9mH51OmQ4A#K#.Jn [)M4ZAC ck%eΉ1s c \4we0jѢwݶ9~m8]8qד۫R'D !8 wn{Ç.=΁s߻LoWq'-scDaf祤TA5;}/a^~i8!N1qg#2OU '"2PV⦹(^r+HBZw΢GTLT?MoV^:׿@S4^gh;*д4hrE4 n2FVAS+=eR!CmzI#ՐΐZZӫ~1( =ajV)=u2'-u̖k~ nI')lb`״4y:\pj<]~,N&2\SV~R-?D%'D#y%|X"9hr3#Ct~h1|sC$ ݪgA=V=k~bV.nЖw$uM6\tTܽV)ZE]SDx (

_MokH8m.}SV%:uV\ݱj3kIv%^6$FђeUƚO`cvvw]~!^=8xNj˯vv9痫/cٻ6$Wz4T?ٚ ;|켬WQD"iƳ1}j@5K$]UߗYz'X3,'ÞYG%R\=P`v=s@N%8"erZXɕsQB!o1rE<`}<(Z/ c*Jua<+.])!(x FxS9!V#hNFJxu1]*{u%S^Tӭ쵦bJޫ%RH "FQel`x'GRIgc/Si0z?*5 ]ywA'2u$W7Vz&yzn GKu^j՚JBsun5Bʓ]?so6Z (*|#ޖ& qcvsgVcG ,Q CSk&|u`cщ?ww;ʋj5-q|u}6-8dϻ2dDe±SM t6(]UߪBY3WP]);'8gn)$OjZ!*,%J{"+gB𽠭qNZpep`@Ga$8a_-^%` ; ze#! @Zǁ;k#gI FG#SE,PX)] ó3e,`XHȄsH:LBپFOasLpc jРZ0\8)BGZQ}f#%RNu?doNιl,G*/b)J'M娿:2]oaWK}15mLrm_<). sS»>),{'Zu_䚸Z{nWT?]pw\!Dy[q/!Ҩ.ew)mR)BDu[\ҜK'OzjKlm꧷~m*TwkM4/>Q-G)%vQ-i{ N^jHTڮQKb7糾++5UQ'6)[x`j}q9ůK4[lx2pt\]uL1M?OkfVneצ<=8~jL^\}MG粢Lj|["朶/s<ޤfSXN8 @V@hW(:>!P%kDH |+wX؋oG.)Bts'i1u : yex*د5h$/ ̤LU&NQr("!;[< !(A !hpq,o &fC*U6 m-rT{v16{Ebsic6+:$`_I2BBE|R\w&!-;2;a~x?|$y$9oM7xFUL%ڞ`lLe~k,~xJͫ]M$8gm7ЛEZ]᥋f]k&泹Z͠'5OK4-rm"AyRд*nYyYMAZ9 ,tfrA/,g|#1ų%8S୆RTLD1|uFI]`4:#s/HSh:#H1䂥WPciCcc(/H%#J6:i]e<z!/H a;9VpE#!c<!X/HE[r˝-%%bp}) gp]_XUtNDΎ1%vZ@Oi){;+]^񆡱Y@@e-{*ֽna}ރst3 m}u~9*)gQ{[_[[68ss|]ӹ8w)sޓodK Ww%c s8Z 9P?pȝ\&QRqGN<4тi\#kcSGj=iHK2,]z4\_|"vb S.p&i}m_b-,ֺ[[JWs>[̋d}^0eM+&g0[ԴQVY-,pӴԔ uK~8vIy(C>}/.K -~W`ޯ|NN2|//>~'!cfmɑc${^2@&onUiTUK0SI0˅Lxs m;%wdH;ԍ2Tbp-9:e&SU!tP5'M! 
BRl7:N h-+kJ 7'l+ޙ T IKBwl9QyuOB DK(oHV<5I$5SL*Ym;5R=㙧- ݭKg~)QׇNk^UŸ~`O990iSp1.Z< J~U?R:,+)akR:+UC &YkGx冈Są#~ ƒ%W`2\Y.4< HH"hp܈….qƌ^8η| ҂Ii ˠ$ | *6LJN,"lYǍy)Z^6қ/k\oܪ~fNO ;@7Ӡ]`FDa`!дqV{W;b.F%ˆUP1CC,AIgqFQBZyS<gE_I@:X!%q#30R+G ڀ) 6h].(]׷Cj W̳(1 4 Hq4%Kp98P\< ]0!F ``i)ra£aR{.4dLZt:A/lKsx-*a wvk޾lC!r}}k^,lyFmGJHK(LpZy= uf.kf?Lu{QZG:HWz uB!1 G /:Y1h KQףRi)xH9>vZZ"G RȈ+Ȏڊ2q)hzgӆ&1m<|Y]9e.床cΌuax׎RSTK>8 n!XԻ"tkzG#(_кleߧi%1yH-YgwS$/KK6_;v'jBMRʔd/PC/.QRKB,lu溲7Opz(UanNZ?1^Z TWKEj `OV@SsS0~.vw~wmno|۠W7r#՛jϋK~'[Ѧ,>j+RԖ9Q1D{Knx:j4Qr(UKHwqDY$a(>_eqD8I阍^4j蹱cj<hK JVnzS]'/hV0x$CY1SJ/hV(vYz}+/N%}q}eϷjKPo>>_=vE4ژil\"#AMGE*{rvl&j~Ѽ(1rWwi)7pPFpxܪZh\]Շ>پNíX<\=t=W}ɜ04:ڋf  WJp{EsBX *ݡWy_l9S5->lH-f.>wַӕ W"A-"=P\4$=/ 3) <,$os3; C3{ o,fC?i' W)4CV;qHS ,ڱ~UD(|%q5I%?Yi3a7tSv- >ss( ͡IBL{6g:|Lg g d]s(0I݁>,]uk PɫyT(DPߎŜژ֫;^5<t_ G`ǘ٥Yо9'K 7ژ R"> 3 g@Ka+OX_?5^uWYOI9E֞q6٦Qcikz|pR$4xXuj9zvp:1|Jr,r\M>=4|j}m{焵?78y?Qkƴc`Zv(sbk21\*E{\8VtEG~ݵA6tgBOQ/]Jq ك2]P0GR x*g !_ĸc>@,)9k#[GzrG;ҵFO MCߵ14Ggi(BQ_{ s*NS2}ս l5^T3]osDxn[z #llgz8_gCzZJ$Z%ˋ/OFC2骯2kβ˖iƯʭ d-u%|ͷsJӦ'WbkebT}]~Qby[{-Fsة=ſ؇{]y1GPiGG]|6g\Bp_JeunSzc&g+=$e0 i6m ~G(`BgSv2Փ^j~ χcg'n[k=}np#.si vVY[7+vu*Vcn 6O:&@lvz&V꾲O*,T4KK|nudZ=s4̭@J Dy*7GpvsrД-Eq13#oK̩fl?)^e>^nZb>az$TwXߋ m)$Njh^yiNQQv 5 Nf׀Ǿ]LAM{E>N_4u|/Sǫ5dā90r(Qƨ5[ Bz^Y)!%m^S*uZ\LUq8l{n0-r:-^|1+JN3_@_"Qϼl_Jnu֍OQrOĮoK`GYr뉙V%p7H5˃{cY\?};z79n o9A}tۀQaQTŬ~= 'iS !jzU,2 ](~X^(N+Wj^{H?Q6hH#0h@ XrkS⇅Wš([5XYc\Hj6^ӰȽ'w ,C*B?vko?{?N!m/EI-bôwn.gcWbf_*swR?r#2j&jd1 cID)K0;TD5($}S-4?@rV9f4tiVN^kyR]0!Fw`!:FĈaa4"{k=F @P]ݰc1C|r 3H8,V1ÈQ=6JxVaw c2Ac$^0W`r4<ԔsYC… AmKMdX%iS >F`qEA5mAO ;њz ̈(c{- NySpۂ@ )#4!i,@ I#\d( 6hA^8T̳CLhgm0D8po,"rb#Q:f4 J1E9D2%Jd^`kR:+U'HQhlpWlFeINH&8ERlW~8~$ ?IMnfFyCٴ* ?=es棝1w3#xKa^e2ѐfTeo}MF7Uh1bPtrX'o&rwkn5[r&eS }$ bPtrX'7}J1Yt˯׌nMXȑhMsrOlbtŠ䶱Ntko5)e;nfjF&,M4˦'M""!bPtrX'WI}΢[ʸѭ 9rͱ)F7N>D\ NnD궩OYt_Z35a!GnY6U9o_hg-ѭ 9rͲ)~m^D\ j2XfzQ]tCnʹ 9rͲ)~ 覫'P9AI6֚wkq]FSn֝wkBDlC{?Vulֵwױ)[=n t::Aj1[/{ e$9wAjUDnmerREP֭5  zh{mԌnVfQO`-EԦ;DQ ZF=b$n)Ekuۭ cSe sWcnPWcj]IO`z05f뜂՘:6E՘9<1w5F=m{5fhv5ܨ'H̵V]17 
.^YͻsWcncB5fyDWcnRWc$sWcnVWc՘I>䢫1w5f=ozb5f%j]IO՘"՘՘.+k](jJ1kJtWcjMzfJ^yu5棨1k鮭-֘5’t5ܠ'Më1kj]QOD\Y#EhWcj̍zoXc ՘sWcX0՘<Դ)r4"%`})3ay?< (gWi?}0IJƕMxL[̉~ C>%r082aoOT *[%ϡP XAW:3/AJYoLpqEFao`ۗ?};z79n o9A}tF}#D @5H5H~= 'ͻ#+!'.ߏ7aV;"`R`XLtrU)V7~8-ג _6}:Y> H6%Hu0X<=VXVןFx<>MgC QS#0GK:mܽ Cpo{fnr{?Z?cʡ:* d@c{=$R,r 9 F</P(9Rsl}D9㌏^P jg .(%< EՃyJba)#HFC!H9\G Rnyڦ8MSYjvZ 3A44#<>0KDr b UL0iuKÇ0 iЂȸ6xIr8HIJF1ZhSɟ˅P>c w'`#H൞s&"H-"C`& @(jB(Npژapb@LFHx)85Fb/%['>>PՄP`Ft,RN)̠@c :wk=N 'p 'ӱ 7 ƃ6Yo64K닇%h Z~ԩYL.<ܧԸw(M7鰾UǣM~Qns o _x@o~BJ:77}G/*-*~ylr۟2uol~;P7\?/F#? 9ْB0P?"'aAQ`Ug@#B'X@MřtNM_iݎ՚;I9PULd /\CA%6bᕰ&(J,5 :fqkH}XjLN>]S]a"M3@?=w>V|^I*stp}>5ԡ*S? f4MZ򼲗>S47O#jRʴ^>fɠ{aPהziATa=ཱུu^/ H8LL2>ͧ@"K""h&T4NDDɽ账(BHpUAb@t3R]oQu>pbPC.B#DB, HMvabn'Z#=C%UZ!帘㏷F[8hƆex!Ʌ)FC1ݠrYy!zupR/qW! E"־TM|y^ޠC4}?Nŷ7\ؘWO~hsjst/avZjp`rn|s3(/-R)~|zVƸe HDk :g^pb 0q2dӁH ii??{Wq [켏a-iaeXЬI1 wGfjfuUqFhŬ"H2CX/w&lb ?L[AFjDch:?BĶ:?Wg/f'$!i&I̎?}q\I@T%"zQV~.ܓ?8^[Zp3:zkêbnQLFCuaN§ ?E>Ե=u`ZdEfNˮFk2Oרik*nn= Kojή4}7KdG%f7t"f<IfI&\9T2/f ;[ty?2ca=0Y$ͦMŌj&|guO ݏhw\;4#P$'ZNRWUZE0gt9|_fNx2Z?/f T*vJmAypDI$wooOJ.BϡۢLOfz"M&̙Angf&KZqph s6Mb @g4cRc3 ǕNcy>R9}GU/۬/t:qjGک ||dV%ˆF^P+밒I,h+0XE#[-bTsS{-\gHEjnBI$,o!DxQ),(!xcZk(Bi?xI+4h|GĔ1 &ocY60j)> 3$2}_^@L_ع:9PRK$&n^s?\}Z'b^e㏳~//<9PtҰ9l3Ds&(iIV6/>KiWb^ֆ-)gC|4pʤ,"W6M?";^)a>7F P pMr?d.$𣏕YHLJU%*5#hgd ,RA'.ɰ!p\N?/>F)oC$hBfLpAx2 !pvOXR3egCfa.f$dN-C Z >¼x7;iͬ!c$gqO~$% }"/ÐI_F\e7 Yβ|\x>pp)Kp>ފ/y6vѰ= :~v'uy:BI~;w˭72*fW&dquz{/W ~vt0 6S@w&c\K̲3L-[[ĭ`gL= 漠%_+|9u$%iÿsU&Bk# <9Z פQR-LoweΡ924_|%ɅT5l4ѼHi6k1,UƟ,>?gCNnDض_F9H92xa<(Fŭ4BT\ K!8bomb]7u0 Qу_NzTu'@iML;pFozeު<ӽPiquV?Jx ږ8+20# +$TCΔ nkltemk<RJ-ŷ fB߼C4TZ fni(q]IPJj1j$8oxWCpZ'9̳/o@eSx6"4WD2yjb\+] xg:rʥ$DI*;_IRqTZ\(tRmkƆ)!9wP_<]O'so(vA63Xo%.3J3Sbx"zF}%-ϖxvLE-wd ^Z(M5kk<؞%<@\dT8lu5v(\U¡Ju~ph+"hkZ \chxEnWĢi}]Q8oi\u (%% V`Z>f7Rшt4O+ZizU\IB+@֨=إ2 a%v.@T""0~O)J|/6$i7FR5g%U7n,}ѲG~_E@.a֞ś :(Yw2TP6<0kH.$*- S'A )sXĈ8'`{fyɱt^("j tݗ?[<^r;)"{dmS1PD@ yK`N@shqpf%N2q{׋:ڔ}E$XmnK}VUJ9tWpzI$jj X&ZWۢ5P)mLg{%P@M Q, Xsa!Pe7#u%yMMو=jOuԸU%{O %? 
yE*=0EYLrq!,:u7Kݬoێagᦝ'n#o#|9^}63;_0:m bz >R]{eOAS F5FfMׇ܀f|gs b#`m;ﮤYjgsT(NE_h:x e)Yð>S&e"9AG-}rA6ˡ`]QXixPFs {i$} eݴzg+MmuHGYW8=Mv0|_^l~G-J̴pp(\m*HJ$ \¶gh ! IB+2u=|sV;xJ-+"ɺVdt-S|DmbLqh'kR<4_LT!/i]uApe;/|¢{=?g(DdPN~n1 vKQ*.}:p))&q0YPH`4P=2RZ-rxk ^}H(S,2ʮ*1Y$Z( TI攅Y.sPyn-XbL8@P0 BsY+'-򁴩uʐ()C^߮1̥bGbY35@Ϟ uY ! RLnY~'HdпZ2TjxM/|%ޣ")6k׸!ocm-8z!r8h%A΂*vP? ½7Htӹ6Z@TUBn=JIOa"OiIB7RutDAC} udKۃM7WӉ 7 񛏙(;Eg2c+ssiA;*C c7+?l~:nY"ŵ\(ٻ? p 6嶨'n͒WD:~W(kAΞ#)&~ 7./X^p΋-5 .bCFp4@H|cUL# (bP?#,|\-cvo'jǵ_j5œnRhu?$zwVr#|gw@tQ[>y/u-P;RBٓ!%~g($t)I  a |O5 y$NfY҈g"$ Arc^rjV%G>wˣTWN˸Z[!)qDFeii2]4Gm3\4L8.S  u̜s""T }m&HZyJV=sXsj.77+fHil5b,ρ~!rpnKA|OA؋ 'cGҾQĸ cCjgԊhƉA iyA "Va"8/+Z26Avcl{ú EwȆL![91B9 PNI! C 0P~4:QeّJ#kв6 @. dV9jP3]=q1 JvaD]tԤqQy8K Kܐ*6I,qNk sR4ǎE /(B?Z1Gq}Yn7ȗ9>ߎFaa“tDE[K,lke]x]xժu-mR8^ 6 2g Cw0/!~q:_@/i6o8A3%*0}g 9뎸3% $txqH8G"8;zh#zw%3vaݫ|k+y:BSJ3&f%wUa]'u^o Hp`Y bΛh4'Qz>|OU0cH]Q7MTݪ(f[X5喤i4 ?۰! :oޠ@Q:$!ʿDBF?%5KiP%9aEQ Z2b NAh`M.Q[DFZAa  `v +`sbR!CHU0`)2/rGI.wܔ?~͓o' 5eȄE[>y鹻_ͧ~\p5ӃJ ʏv ESިyS柩_UXOwnrsEY=/3WW~ u|PNȃj_C&(`g% P1SCD}s?;a2ZaԖ_6<}5In ?Ca*H4xn4t<'I/X G( S hCV[6rYUGZU̻kwIOj%CPy.ɅfQZ[٣nhۣtF`wotAp:1;3ޙ.sztTܐ.e2Rc\STGTy1C0b+Q?JYŀl>>+vW9=sJ7P;]HerhNI LNt`L HD_Ţ)C\u/L婠yN0$:0ҡ? \pt#hQ3pPd&/$WPΩɱAjH̋Ds w5JSk=03g{(ey *aqtVˋ+=37iɞ'ى &5ZE/i-q2|J6鎣g؅@Re,qXg/Ҡ L ;#f;Q+R3ǸC#OPQ!WǀVAXp#.Т~ zuKhŰP1o ӽkq-16Ѩu }:y[y[3pVH:_&t{u6{=sMa#}KР#9g ?Sx1AZűiD~5Eb9XBS-7\ !+ RE.YFe[aK 0LZ(R89W9 R9BS`%DHz*2}D| PXiI7(~8);k v5JvU`ޏaqR'3&H$'%j 75j-q7߽ &ّV\xoBʏh6?u 0{9VM+?j^)rO/*Yn^SeؒpM)qswSv-I}GvuirkNڰnI6ٯ88OR11w4nE(^FصwKhwkB^n)>-wլs<ӫz&t9DZ[=psuIOGzO:^{2rjσʅ5ϙBP8C+${4B6T+&J'v4~G2x6ڰ]58ȐbOglw Vv %86/Q&s,KfH  AEzs  bl6G|[q٨X;d3EV`X~'[Ή*2'KKnk"܁e~nc0) . 
[binary data: gzip-compressed payload of var/home/core/zuul-output/logs/kubelet.log.gz — not representable as text]
Bl{[߼{w{I~W:.LwN?}"ۿ`if7[`5zm3s`6mV{}퇏Tb>O$it7@GͭhA(is<%d9m~wiz7'P| terVƆb=}kvC;+II~E_>mZ?M6r^]Q:]fݦ^M\Mm+4!*PǁrvTce^ndf]E̦c7Jٽ6!û`>bH$$SI$<+k+q҃GEpjU{ʼn;Nv-55H}xC5X,Rrk>%(\HɃIIKZZLH4;!/H䲦~6ǀgH)Z>`sSKQ<K{:jcQU>:sYjpa9c!ͳD$ & DjbD bmb,*OP#]*n6M6sAI'#"HXRČ퉎%,,#9|%4 7d[^j(Tu6Cp_q6sP\PK6*iL %Ӱu@#Vq,)B#BwF{b>Ҫfwv62&mASsӄbYL3*i,KZ8FyJ+@!`[v ci*#RTHF5MDd?iL)Hf\pJFONLi2fFϖ,0*>5yx\|G#O 9"bjAKp_JP+Q%D Cq>"Mq{A)yaܨ߼8>zrIتtx293LvWe?s3z 9qԶTCvVVmX-޵6r#"eq΢=U40O,fy M2,(vےl[jJ&*֍X=NY7.8.֊[ȸ6[Իo [j2:ފQU96U -8Qh;*=jsm-O Ŀ^δoCioBibFM$ɴHdVr-OTrA^0ts~3"Sl;V|íW;ǵG*ps Av[ۊ`IaOԷ/;#x/g".gG]֍ջ,3aLM4̃8e>n8VK靳^q5s7Ȝ_ ;gƆ=/cCs_]Nao2̗^[B aJ=EJ'zkҠ~r]3v"c^=2Rsrw~~(5FWղ!b6P1Z c f&r:Wfg#a{ Tn O+ &NJ`l{6MB)x9Pח!mDcEZ-`߻g Mf7 ^@s[7j lj~_[A rHuTS!J#fT5Jw_5s56o33 !x1*yg7+ԶoXo,˽GܰFl^H*?f+[~xjsL~p4a"yʂqp4?Mb!O=9m,&l'}/QYNDf:mJbWjk (Z6V15qZGj59OsO؟'OW:I׉N&?[mN얖j=H029ƤߦɴۘLJ׿W8+K6#D!Z8gjs靽3hߝGY.-ƾǣ_CVEgS>,>gV)Af2 io "ڀRzi$*-SB6"SLZK34 '+ / Asb)T/h1N_燞6iLB1';WL:rv6[R)[WL|ڱyg /kaO +iwW6=@ymRgSL腏Z3Ǫ0lĐ%sz+F V0 AWJ/扇V'Xb[^h ^9=ljiN24A{ r#RsNOS$oKKN{ 6gȺ8  ~AiBpb$Ն/Bت߭_EjT!A%x]9WpZu3X 6 $s􃱅o:M!f l˝fYJLɊèuwR/E#dQ!*yskmrb3XeoӚ6HCZAK%B nX` 4DžAfR_w$ F3d&$V̊o4nhš 39H`W_ojٗE|=h\\U ɘgl6Iouݵ֢ܙ-Zk󵡷_C޳taU;O]Y @;$[[يWP [kOv6[u.PYC%ܛ !#גZUtp s sHsIv`W-9, xھۃn)G_s9|@3*89/28s;Ӊmm 5njip-Zkar9B]ݢ^NPn\ 4ze$[r}Q]2j/` "Ir"h:#!X to g ~ \\V 1XM]0sGf $} ,"$o W-LKera9Qbvc9SmnvWվ1˯](L ޠ تyhVO*ֆyq?bzgj3D>^ddCS1TBJg5Qڂ*вf~lUl ˘rVїVM1MO4ϷG[ #7mAU PX5HLzdžXR!m!)Z,Wy}`)l">rD 1}f ,c0t߈Q™sF w>978GG>EqOQ6OQ,F vE`wu :JY'cI` a2DMBIk}~5IY^ϓp4:䧙L얄l]e6s2;IMggiyWfRW*ǒ{ɵUu.5X5rФ:֌MsOG )jC{-B4չUHE{\`7dxȿOh# ZdsdPYaRw5emT%\BdK.3-Bg"K)yB +#m/!Nϯ.ȳ^F+WU$R6$HU|,,r5sd ,0 @Z;f%'"[kɖ(TvC˰-A'hkM_?]GGG٦e6Q5׳7C\-9a\ծdd OBO+W4DRq1Bp\ "]R eJY=KH2/P2tvMH,6ݝ빳xzRm}EzzO~bN"ٝ?O~xf>w(G^P.3L`x,n+&wG]9>τ w`-sBi>_2R]|c8f.wPP5NqƅZ-,wdJPmd*+˿ÏВ$J6VjX#Gca,B%DJ9B62FE%[UcJ$ Ru1k]qVF,gHa}a QG}"%>@Dj\3 :iO5 <H6 ER-rxH Yר49"{ A:qY \ `Zc$P ЍCՂ` 5l]DŽ(CCNq!{os(K!ECmYVe9'J jxmp Ӹd Be}]0Qȅ,ɡJŠ\mscֆ!3n2\KVɖ$ k/B84#es"Dn/nVo>?Sb6I-9dS1? 
)TNJ }=&kN NN'Jɠ0рy:Qy$$9$vٝ׷3Z'Fr:}&I M3m&=7Sy㚫^l'\O:c}G6G^{rY딀lx@#6IQLFw'(xQEJ + dI@άTu0E) T)yQsCW%c8Mj:a%PޥmNIWeb"PC4PBHo|*Z!wAL2!ؾmԵ`&MڊKׂRl;Q ISyzWL;x9^\Ȃ9Ɲ11I4VB˸:Q”[Mk~+hWטΠfN%gg j gҺ.+[ybup#GV3(+8Rp[py,lN[' Pgπdf>sl(2nGxnjgP{A,5=oV-;."j+-RWmr Dd[A#s!ňm6KLHeD/m-h xW YL h "X-b/ܨ"f8"Fq,#v)Wz9kW$_$mD 5g©U@r. xW "F1+g*Ur&3F2:/ nABq+I/Ѫlɲ (Wj2zeŌn#f{䮒0޵q#"˞=yvg׆'>lhF3GC4uid_dO$j*ź@-Ulrz5А8vIZg]  3#{!r { 0Ө&2ЊG2ca\"V<6k+K+BL cGM0kHhwջe mN}!݃^}g軇a6!zeeNڢr?5qA|~̶_6GǥyjŒBxzZ+4hl?zddW~$5>'CiPȥ:t ұ*S-!n%)D?e Vnt6yoom211{#YoP=z& A647y: )RrSr3[|^'nM]Pwlm6.mF2nՆVe֚\T^v3?rHsSDȐ!LPÒ1vns|^^y7B4@^@./Fc.(9߽9әU/M!%|9] mɃ!.Ky.|&Z+mw{s&o=BXwzCJܛhi@Fo,.[YK8mbsa36; gqtF< [I=%~;nMς#Le ܓZ>L@ SFs6#7>u\22L$SHI"3r 8D%<De87-AN%(b Gg0!dP13"Y1BX`M Il<[ZeYYbej1!w]YbKV'SeL3B={B1٥lPQ}r!5*q$@ֵIb G iO7+~QD]1_Mub>> şٺ~p` +<$$}5%_J krT}]9jvAv]aQ+2bU@/gm6c{QL8ۿ#͓tth`P᯳#׋j]œ惟҈r F$<#(K9yakRط/Aflwse-j:1=n|kf=H>'uݚ_AXX}` )&hLR}iSg3׸ HqzLѳ&q,=ZL. kn]R0'buBq97~PR\?w*jpհ> 6 =A"DG5pw8xuQ-HeɨsxEҸHTRٿ->'\|^8o?tVA!ٜG|9/x73Ib{$ɳޅoS߸dB.$&UT@AKY[B^YBOGi6Cx7_o{4i՟'҂,UH H r19d3Y$jfaʼnQI d `V; tkU !!j6ݻϣ4\IJ4FĀqXb6LU‰4I r\ȌjOadHF($1aNED)B҉dAFIAz~zj4q8dKڃFhF D{ R޷_{CNioĂ i0l PޜR}1Gȱ r aQ,5TCiC:ϴZ=x=\Pz*a31UC`26zu'#RX2l6樦S#l=F66VQZacm2-nXnI@9J RΕ=:iFq VjOq--DAF &mQ]MNA1n퍡C)bPDxPD_Uje Q.ӾH!11$@ DWNvns{cfEq8vhayQhnX-&j.TKC5Guh֦Ibʫ@W刔;Ae?~ YoNX8KfU9mfLP2D4@ ;}-oW>h-;!A6r07Dvϛj8F?[jf32NzhCxncHdJo>]qso8ĢET a| +gNFw:jxL>xe/fGۅquߟ覸:}jo[^I{B, bMdY2-'K2tpj޽-޻8-s!dM9)z:Υک텢[x׃/,r3PqR'ӭ` ښ8i?^-|ϊB&߿//Ų7n+SUiaM2̲~,ȂsϓF7f_IJhT˖$; r}uh7GS- vнmxڢv bn1$; bҮ'qN=|@pSL{'-=;8l"Bf(*4^mB@)!&Ά)OX=W{"1l=ӈ`׎T:yϹj aA^8s~sIqWELpEq?ۛ-" З/ӓP2%C@D`n0})bGOd% Kxw8rxHA{ћ4^`4G[NTAPoq|c !dO!'@† >>X=K!|` ϧ'%i셫㨌g=}tYdПs*oZsou A^{,hȚMHc-筸FA}ނOcnVnlMTi 0bam22!de8- ePg~k[m^Qp\ BsCm^ wA?v'O8fN9@' B&C3+.dnkBL9XG_^,unX,fhde5VZYcZA'C,,D,3@QFDZfuDESx"B$h;"% +vrN' uVMk/)c۪ctr4n -2RNsn8M,W\c$Uh*U&@.HPNE `̵l=p!\1j X/bff+D*$\*iV Sդ>]:DžEv BԯWf?sN"QGG [bK㉚U%nx[0.䜸]4[gˠ d6%KPPFyfTÒ1vz(m"/!kËg4*Q<Pv=[7ڝd:qec@g?1p߬fo^ڒ\}*Cia$ϻ $xdI.O4rĞg4ٓtttt]r0g@e8 JS`u0ʰH b+T*li $ 
? (,tڢK%EP^-9!iHRBKbJ4(S:S*U(Ur IC>vk=J)PT )L Tr"!T`9 ̺ZZٽ| ]O ?󆲀 wAhM$@.F6v7>>C|s3zmj4~z= |_H}5 dV|yx+ꗉa2{6"R`|Օپz{Ny}e/Lu'9ܯy8-hZs _b#FPH ʠyRR92뒚a}yHZyz;p PT Dth$$QibÒ1N>pO^fW {QD/%u+oۤ7x=\Pz*a3EԆ 1t[wxX89EԢ{lԢX=r[r]3|,dq֠iP1ç)qQp kQ8.1. &B86q'zsꃍv{fX`{;b}A@Rd= /nedSMIN<$J_d+ (%sʨB-!S;!|[aTJjO&/9h 'w$J* L2oXǹb^D ix%Ɋd yNL1K!8O.awt)#..2LJ2p&Hc.v8q%bhh\w?N=8"%TP^s;i]hiœ|?oL> RX [ZP*J?jJ#9)FKnO"-&m>rU)uz{Xuޒ(SiztedRGUFMBUt pgL[\-+&~,3#-\SMuk-%=j1uIݦ>Sئo Mʦ:7kݻ{7թ9wĠt*1ʻm?trMH_ػM`!?nS@j⢨fFJ}6E4ТIg5"f]`Ɔ*7puJ:\Wȵv- NGQߔGa>U+o~\JB2T* p"2= DZѫ"njXI3 e;\@s~}߇ =*"~tn !;3Q|+*D7$EAb0{RdA"*[zm}ٖwmKu6A9\Lapj"X&o l>6V*(pjTESV" $4QiF/(HEr0tth(,b|.;;IQk[+GGa%?!jj+YM8~AqزcMaVF 3@E?Vi1ߜmM.HwLӛƈFILU(:7t#s93(9IHMϞ>\^}N Z`s=ڰݺ_?{u'r!9cDmvNa$ovj#.'o LOo jwfV؛n_lKzuw&m?5xwsꜝu[](W|&M!oOWiut˳6\ 3{IEIԆ?VOJ'_:NB/?0x ]2|||?p:ypQx<>32s(f#σ'buFg f/=\Wgw+6 V}"-gsJlh|S 0dϩf7`:Sym26h{Q0jօiN.^ #-Hi&= zoJf7Aq zv&2Hr/7{Q?{!kYzeϞWo[Ьq^B/'o82F}o'|dB ̉ i_hxH3R3Qp t>O*(ڼڟ7oÁF֭Xq'%`" | рeDG7^0bz+ Ӄ,2-%w6HPi?iDJxӀFra['C[*-0FVn e뜌jyiIhcuvPZ&y9>0;'ֱM;D7m)N(H%(>dwf)c o}ii #)*9@%VB^a-kcA&sTH 3^}XEtCGq_!__GϫB){^ӃQ ! Q3Z>wkLw$yk}c;ZlbѠaN$ nTs+nUc?lNz+}hshSa$Jk&=?τ1$J0D B('sܿvVʁdQ!@g 5OL+J[D#DrĠ̔jēACr-7aU.}n@MjlT)izOvsf,@qMYUE?]KY<$XMh KR[m[A܄̥+01xJ; okl%,K,K>]]_ 1[,V~"?X=5CQh·ˬzx'F$1ҷS~)3t—]@q—mKť/bLURQ`L=` *)E 4? *#*LKXRWvAi/-7rjEW#eFC9:uVLVc2rַ4E, %\LkBs\K Z+Q6`\*LZG/ֆҏBM$j ɋ5VY 9-#8 FIJ SA;@l*plUWׄ\Fhh!Hkȡ|^km}ˢ7*%| *en`=fݪ]!unwqާooWrOR%)3?lAzaOAY, Aa4Vl"^C*PTC*Ӂz(V@tt 5^k8@yHZ6fHxyk/V̹I3XZl}W;ύsNL@)uG^^Sk^SPg,c-;[~nvooer カ=觥l ҍ} O.lRs#aHʰ.$V{<7sB(n+G(Ą@dVPʚ-D>)C+I7tlɾ1Ҫ|s1X-4?g~ shIsԨιLǞT$yV6,ɜùV>Z2R}P(PtA*b'׼-RE)1JCDDLJJG6m\z!MsoTHӬoL;=iLG&rFsP8Im3'Am+*pNjT@aYy%W?ܳ5 ִ7%>!RRnܤJЪh7D/]tU ,kY|&|zwMj ݤmև4hF/6λl9h*CV8%+[WW/W?YO°ȂyFrEr\/b^|c"~1R\cR~T6%? 
%߫441+ڞ /OP_|DkYd3NOhdNpT|e^&!#\DF8Fu 2%6-mq ۫2&Bql(hHx 䚝ɲ6 4,ӏfX@b>b\˻^J,\H1ġ2{*2a#t:/.ܭ@];(h :XԡңN7c)J/QE'_XxhTIPw՜5G đd+Ա4uCi1Jr݋L=j>#R#tN_'țL o8I%}8:\J0DU~H1, FEIZeR#kH4;2DwSܮf9?0 ?\g2WfF"{֋#Hӷڊ"zE(.SPEp 5#24/tHk'@Y܇֋ ⿓I;+ׁ!N^d( \$y?F#8 ŜN-Dw60 L𛃄f4wawVk#>N lM{k%g|bWW,jbT_ĸx 1ģ3lw L1{];W05e뚡hSw!Lx@t\>\T +X(G%΁{ߛ&KVJ?]9~9| ³̑32ӛ_ե>"iQ%o~y%ӗ>.N?&n /afWLmt`"HI\ɈX=I͘oeڤ"ɤaxRhzLZm s }EBѢ09t 7m ZܘI(۴#tC inl\+NhS1A)5Xnӌ& Z^h$\0+LтD KS\: )@ A[?ٻmlW YeNݻ)KEQ$56mJd6H2L=C9o7ǁڝͥRr8xo FSF>!|I(Z;>ʋMcYf<ԫC΁gft3l`B۟i]5_׻jʻ3%?cCf6Oudb9K:^jr@j^Qϰ+df}EnczufrU"ԒrmnSA55}~D>E$ gX:cCD“^^!x1{UP.djYσ"vۨH,3!iL*Qjᒣ7 ]5l$*$dZK:uɫiz1jKZm+\"k0Yg]΂Kfs } 1fey/Ϩ@ =_TR!,7Z3s yF]=mf(2D)Di HsPl}:A,[$>)9TE$EΩPC E(I `[SfZ ±WZ=mDDHwA[- e p'W!nUiV^=ʞV!u7r [tcE1۹)>iro 5į`F?v<'ŲO~ Z,ߕM(B3+a^0oo+!16^U8tcŠ8h$b{ed"PJaA㻉8쐿*@ *; P(00DLY siij':1Z)lNYq@j)j;!Rh E g^\gDJ`P6154QJfS(5&,)qn 2ݪz@s֨yڨ<%YC %aY&Bsn8Bf2U(~P =Gҽfq_˛fǀ\x;} n}k +B;=xr Ņ6QoumUNͷ5&cA]{E7d>{Ճy1ѳ9jK6QlP]MOcv5wRpna]}G}C49qQ p!m'Jz5P V?5( ѩ X&viMk!|j&TTm&Հ#>1aXUD5ւ^;}|?.&d&dݚ| Wv^szʗsǨ4@s C[orA@ǐ!l[|X6 ە-]|yoqcz׏rZc/e0 Z!/=wۭ~u4J AaѠx쨃x JRam967_[{; +OI) ׌&88|$l n- C'O%Dljb5-(N< S)Ե6ɺ.sr)\ao?ӟϞap`e=Yz\d!DlJ}$w t"Q˻0"mtݢKyC[ MM1d}ѭ#R [~חP RGfxԏOw^:>YO?u-)_jF<^0ÂBܯN5GFou$j||/LO!2GTvre8QCʈ_zVLI>6(x(Bb:G nw1r;>r>5L0޺;UgxxWJg&O6l*)0BNuv45uqRr.!@[DǁXCDDT[WD"2ah U@A40/M wamh "D@_h=89:]üsA8pA+Ana]=Dd?(TM|cw{}$R0H-c2Ry1`'ecf*x&3}&=Sy8jizuu|<kksShKR zj젂iVWj1ڢQ }q*U|VxjhHIa]]ٿDYBWqq޵X)Eߊ$d;;9=Ni T]N;}zNAu^w9eM@f-LV=pJ" ; yi܎EByBZOjbB1!W9 .]6^Gw+ ,̩(G-8?wspk$}S{ 71Y-X|̃, ^SU$6;NG hG-[{ݹVB `ReIJ Sb׵>gL(R`E>o3" *ӪwPQA3i*g pF rBAi/$2Ŗկn!55Ē-bfZ`6k L:T=9ڠ:k#y #jH{vgWtHJ)OP=G)[6fWBʓ| ODgxá4'v=2C^tgkG}h vs쭝]`>LYGR1Fzݬ#/"DvsQZs) 7z+\j춚۱okH!jsQc'lWAH/{iWqM5SѺnW*ڰWnQ67jnN=xm;қw~Pֆr)dϻau8w tbQǻmөҳkޭ y&eS ~ݨ-ӉGv­S_ʥ@ֆrmlSfXZ7$:0!"I\yIG#dJVv8P?™6Xg%)cdIş 2`WP( qX7?Q@~3H1xEH zIS;PINN3a9.0& :>U0J(Y = ꐑVhhEvk}`}VZ]vv!_t+0;֚(oՋf℁Sl%8/l bl?e\4oݖ/=K PUS0HeoviP+9}R;"/;bDsZmn]dZO%ҟWiu FpNثt$5ɭ&?Iu,Yԝ;ߢlf6K$4B*K"F)i 
ȕtGb2AXJei$LnLu$ƣ_|(߮G(zT'@vJלa,Z~2W,"(!?#CCFAmnRX!f?$Yj89J0%db)TJ:WθXgjMS( RvJ@!@QX1-$F;cRzo\lan908f+|.?˫G*9jk?7A [tcE1۹)>iro;`۱Ųߛ&ů/;SX+Y>C3+^R!m>;NF0ĥ"8V ʐG"G˺ĭe#zj>ތKU7㧇OFMV[6)L_'#FjxJ]7̮ 9yhJrpeg3f$eqU!JDl#*!dw=nH×[d۽~Aoݽmw2sGnlK6%I#@_hY*"$3~dfzf_4/~aIu~쭤צZm_(C uF Sxci",zQA ƽp /v}2A, y!+g41v)ub^sXI({UzZ,Igh2d}:,/%N5 ֐x1j9:X]pfխ^sیY ֮M{yeξ6 w]HGɆ^l;g>)&O&5<@$WfS~](sϋ'@cm.s!g}^J~Tez=ڴ >]癇 zA[7󷹮8I9%@rpBF=GH#6ҕ 7_g "$s ii enS\*!C(9rpWGxZ0)ҹ],$Lȋ:ZR6 $f`bML&E+ g(9*'qt-UnIAAXwٛ?Q둘a&ôU=jkcN0Sͽ; =l64U|O\HU`=t|$͋S\*wV&Aa4OËOd%Xu'I>tapD3#:F颓UA ŪOC~8ݐ;{.ww^i=;TKvx;n?/rԉ}{z<W1\OяlQVht(YTȩJW'P t]@Ut]@U9UG2 0{`Ps-@E&RifCJqrqFGJQR=Rc nݒZT5vpT%)ã[Djb Uσkl{0ɞ^d,,K:1=ku0lpoS!ݻdn9pn9[V~n΅*jDJ`Z1p W8fEs\W1<7*   9'uj52 *uY嵈dF[Ki"L8U9*2 9T1,mqsvR6A zLF80@ie ()"T#sC8ϱ!@ 2$8-MDt֨`}Ԋ7c 1QLm0`G$,wQ9열jKZP%+(\:#ghʤItPSL6\9o H@X4V͉Q8[(ns'*ϤAZ&QWwg)But4cIɉ(E'TX!$v0 3 mf5_ B@@5c%+.؁p%٣/}.7%8bF"@CؿrʙYeNGHn f K"ΐ;Mb\Fܔ d䈕]{:Zx0D 1)Q ]U] "QB4TH)?IO3)q DRq=9ffC=[uQ¥w$`"&YeEHNe\.?|xxҎ>(XŒ K:D^׷~\;f-|X~|ɪG͖K{{],!y0r ͥrMnȤ(y5ホ=o_ɭg? ޼~k&.ήlmg#7~Uȿ/.׷) !3 /([SRLa΁g9-K+oέo֛^3PMk](:)ˈroS75eM!5LSݦǥ7.Xn8OS&(D F_͍Ob&-L^/]C[S[FL`^Bo_!w#훛kH|{ׯCVOC)7|hs%ăk>=Ps4XqS ' zbH7?]#=M1-O맶ha+(L~io^_6i%qmZIꗍT4ԨxmaajUtSJ5!8dyĨf߭Cz4lOpJN/ DweO)Byz )_pp`9.bU3`\vVExs궡hK44P.3DasȌ<_0'sy/f@.K+X(@6X44$6d!rdgrF-!0,P'`΂[Q'm^ 0 Jdtn\h格 ;*sqyq]޶"ad i'$A5 -E9T>t?}Sy8u1kA9kvJsk,1c`$'E#q J&Ŧj#mb(xGRX{A=Z@cʔiq_7Sƃ%]nmgK=="(4l?ԕBo%,◟*>kt^syiF @ˤT&$yt2Sa(/z?oo O7[qVS]7ШĘN)[)S&*~YNKTZiڢ@X)h. I~}4ӮO{[(m1- .N/ .2$  I34$҄yqk$&rD&&UbkDݐeEe#i^|_26;_g _>|^Z:ב?XA:5enGę=τPf<'Dʄk䀠'ȪOa*\8q 3"( |h[&Az8j/@}6jq a T_{Wc )},doSR$^XKzqR 6W엹WpU3 E8d)\`0"cPG1N逻xѨq,-fte2-X-pX&reWދ63N xnD. EY&!'LE"Ml*soQ*;[b-+oCچN}Nj&lYaG،> X__g8> :q3<@OS(#g,XFIi:Q4L@K-.&D"*#~H!XP.xu۵ U[„K{CdAd vɶcjz 1P BcOܤN+.ʰ6$gj@"k-3Sc6; #Lȼ;k z6 [Š\M=AlDB Z "]v@FzoVs;c஽Ť17,~<~xs .mYmqo'matສ⮧ywQN]у+ZHh7)B˗n)B XNKz% IhKJ(%2M@[5nW:A֨@/ tgް8b26{^QbwbT2&KD!N;f#;cXIө΅ANd~Ka/(G!Y{;20ՙEӊ!qM^ۍ$?] 
uJRJ9%\<"1sd2Hhii#HTPbT̬m#|b^j?%而U= MCBv͢ɀ2B^j0a]RpXNڻRKQZ>CTTu< 7t#i$ *(1UZT*u͐++#~52R/3ZLprC*Ig&K%##?JB! g>FB |\>ª7v(MU{d_ݔiucXs(cd,ݓ1(]F ?Jj49Džaj]s)CNO(59rVW&O[_j%L2 ⪫Oit"iQNPFkvhnm?E ,p33*9nÊT9MpJrTu9M#mI:XZːn!uKZr c <8$Zy"SRqdV}CW` nS΂/P__$ {3XֆoE|߁w=7+/GÝ-xH\0 㱿>XD_ qPqvS6xuFtcM%4 V Pdg \3k?vSk]욧]X `.q縡h@-JjF-Iq`&y(ѵUTZBK $^P0:g.XCo9CCG҉B paqPp7υ5@!9W7;'Tx)F$&xx]98"J*y;Q Vi7GYR/G(bF $WjdkJz޹+$ڴhWQM"& Q[Ihl#- -УvG9ZWB+˜h%_ BQ2P#Ixd"|W i PoOe21?үi8D2 ; QA͈@( ^4Kň##Gx/{R6 ~{ɣ}RreQ8aCP !UhDV&*g K!_9+Uʑ|"!r^0dQg:W4`E'%cu WwgkZY2rˌ&G;ԴfjV/`tzHfS!^>=Y3x<~Ck,GېO1spmf\UQ&L<:/:= yCٓ L`⡶-"lpMJ2Eoh7x'[U bD3h ڳ*/4V!!qݓȳ)(x]@Gk]%}r@Qb.WkP AyM4 .Wj=ҔxA8anijL!^l7MWq6d&sL$Z*îRز|ֈB]D;tgtAGthA:6' ‰<]WdG|{oGi;: RYi߽NL`fk3L: p3 lU-'GttH&zgI'od(pKMbk>Y.nuCy__Ay9=V6< S%(guܷ:K0QWs_\{Ebx-b--n a%B\pB*;=ZㄆLSR.PmJĺHmYL8ӥK&Gh9w|ca sl:hOh:(3d-]XGqNJFqAg9;9[M6 ɾ:QŪP^n_hYFEAЋπXk.4LXP.S;}NA gcM4Ze؄v}ۨ30Ӌ}/"B Y$4m+'3T [_VB@Fivغ FrOv>I̧߫򙨆s#(q()9=s4 E‹_(AS)B C?r&#,k M*ULb.T@&f1j)iZ۳[j/QNC\SSJŊ˱b] 1uM0tZ=nעPw 89F a/`*'-uQ;g\KɣΩ_\m: 8k#0RKc(gc&xo2XJ;6VW߲$u5眩hϹTC!dB M#SL8&Ԑ-X 7%C 4 }>NAϚjA\K'A=Ԇd|5́[zsM}eNQSV-qYv] m,NE1bdKXm.~ǨnBW2+*ߜ*׻d14~3^ef1E0&׭Jn=) uc|Db'!Q3P3NaS~sCЦrڷ]i&ar} f D-Z3ۦeO.0"fq~Fʹ\&P$ЫUAo~#pRԖ @ޚ&mE+A nLwy V Ynz|2n$A,Q?" 
ZL$dXSyQtvHѰ!1^#a$̑66A,-sZٿ?/VQA f)eOD@)Nq {"z$Y؟Pq"iN@r('4BUyjvcR8MJ>g+*8{_k' i7I<޸rlg ;?H=sgSu9{fu7pg)/g&GgwIi"§^k:jg62?|*~x70i% S#Ҩx%EkLmQ9Z| FY|ƍ׷\Z8V0$h{޼3ΆǷqn.3& \ vt٠"d (c_pg 6@юOҀC<]~Hx/ K`1c!DZT!z9gfxuqK(r!p看Kx9CW^SC>rN.,^i܂2LVNRPsbE)RsЁdLLcԺYKt 4[j@Oa$NRҖұS :u2dpe+ZJ{bA!F#4Co::nR$㘲jմFVMm0$Y09O˨WmTAȚ&5L&ɐMm`pA0#J E,TPȶ- >8oT\sub߯3,%laS˔#t^j _)@sE&HTuȇϟ_BxgB7wĺ{ 1GfjgF?0Fm)SS]S&qVbtkoW\n -)~~?9!C|+N^O+!qft~L]MvSRuuqv1U&2cϹR[7M,~SU|uX<؇'zaQ>n|`H*18yY @3Da,)[KS,B9# &D>Bo^N;[{ݘ)]VCǖ?uꤜ݆jr̈ $QbF6hfXcI@ZI%h1%3GCN,w^`8 7TP( hS0A%1V6 J4 Ka"1iP+ `˘ 6G ;C x`إv3ù-Bqř؞] ?ܲC)G+5w2SH]k+p[nJ0RTnTLۑ<ɷWqF^uSzBՠ֚Q,Wqӳ*K RhZ&/fΕP߇"I8qq|`,·߭:-@)GDu#ATw3yfZCY t,L%WnRZ_{W[I&lﲐsgzBܾ^cUSz%8{&k~1$BiY_7T^iFa^#h.K G?4ΤhF(ymf7wEo%Ulv0h#/ϑD oǸrBò<`Ik/cdW2f0# "34x}ׅ0 2 X5c@I7w|VB;LJΰ=QictyDBPH1F. ~YTNQj"7Ƒ1 "~L%[X@*6ƶHz7,s:Joo0Eq[*} !N+K/y'$Ԁ+괔Xt[~RDHw~rNKI ,LY/!U }4/OMLqԵw z> ~yv- LI+o?]M AO ׫Zw qJ; vq:ߚХ@L #N:qzk;ky00=;vˠ`P֪=!0G*b N痱vF2Z,e4d1DC;qyuZ.FK4$\VmӬ82vF 4z]۝ v]|]l [ 3} g'# W\Ƌ]a(9gb_1x}W!4?XN@IJo1\KmHȬ"s#L="AhA!F& VoKM֢[oı1 9 | *:{I7u0ja9cmރ rAZi• -A;L\I.b}G1D5Gd}ÆԺNOy]Fq>*r1[a6=,y)+]~m;xo=8R/Sc ~x?7<=лts[OA%1"ٵOH*Wgk8K,"_?ӿgbfbl0,eâNYKr&Ȧz=kEAnM11mTݎwNotk[r&Ȧ=M Gݚbc:Mۨ2otJeo[ڰ375"6E&#EYy}Y9wؑՎ3+0PW >^[ApY7u$@JM$u_@Òe="4Ne60lB5N $(CZꠙQFAX( FI˘ V#0h"!P @k*z/v%ntdr)})* p2f8DXǬwl.?[% m r0szcN3 uB!*Twq#Jyo%VlץިaLap4/8Z\J[FR.f-h+62 i?wEt `q,4=TS\JpHg!%CY-V-J,i]JJJ3mwO["t)&tcs2ݞntR dff1\6 u0Ўp>~ G u GcgD Y/hRKXq?`kW# ׍WcY*k jA%'ypA-faf9V`Y ?$x-dhY`K5D!P_RFuاt4OgQh#`C42neɺ=ba]j )bٴ*?SR":ո̖j܆6)?&onD|ѭ)9Ӵ*1Y.JAף[ڰ376#8)nZ8$e&>.Z\WsE)$: D W*gsQGnxift]qd.;Lee)60w^~X@ Rg.- G| f><+8ގ MnSygtRc=U(UnA޵>q/\Pϣ*}SpNeǗ/N+D4VREbwKMr1ӿ~Ui-\JƶMK@>-ӎ CVyLPOaVt,Hk򷁄ժtZcͨ`BÖ7{ k%e3~8ݚn Kߎ/Pn++k`egrl󀴑ݸU0U>u-`)mx-̢.8M(T1!"S'yx†?aLvzS|śuYL&Wx S|*iVz=v[q[m 3nw%,Ix*w=-ʛ_ZЫ6 tMmBkF$qk Q%o#5F=VgZ{Df cml4XtkR%au-u2Yqu/!zǟ?I~*T;EwbWm}`A{($[ċOQlśdX.ZXFh0Be(lUhAy |-B1iDFHJrtB 9 E AJ+f,&ph"lqKJԬs&DCPX C baIKG8O+b(ɓ >29Z{5=+J-gd9i͹6"].}LK3-,a~zj[̶8 e:/D}m0eWzV+I.]eT*J1/q DRkv6ؒTk8Z?fc rD}fae7>雒O6&BJʑe 
!JJn-RFlt!E݄/R~;4OZ<=rfͷ~+aFC~?=nUn z^]=p~,kRt"Ni{OjJ"/[zxW+y"ٶ0ics+입v2:YQ)G]mC<Wv8=aw"gS]AA]f829 O.Y־ik[;yAo(_\aUX~Hz)eu&}wad`M5=Zi-םUWnEY?E5QiԆ &i[gy-u,Q4\O,?]6B=׌vԞkD-J ie6[Kـt){E7H4`VbA·.Е 1j Ŏw,Fqv= l#@f/"_y*'V+ ǭE1^3j>&'RZ *u6xݣgztn1uq;48\=ʄO!٬ukYy? IfHTC2;lH`$IUH)h\_y_r/m4LFdP0n6hJRu Lμ2jc E=G̓"GW}B ʴPT<ޖƇH ܞgSAd2? e J H2V0kLna IiffpHK1 o++6h5W/$1wۘJY4Vm(y)vhULvѕ&W/[+]X{m5^[K.!+1=7b>PHE _Ѥ&3It<Ͽna9$n3{t;v&Ѿgz4I7Fzi OsjA-S&Nyvd'pIZBDSL,r)ZhJQ$;ѭnZtay&|i}~f(\-0R pkg9l @6B(yqYpn9:ҙo5߼gA_ ^TRSYy_ @5LadC0 ̳K{ 8QH O.<(e 4#WJ#g$Si}w?H1p7632q/X(ml׽!9r ,*X!XRf 7Ep 4pZJ06x66d&SXƾJcJi* (0[et1i cXY-jy*g^lޖ&L -ۓD Oln즭aB=YQJEˉF!jH63C&d1/m+!E 7SPjE&-?demBk*C(r*-4 $3c12f% Uj}pepSK["U9*(Is0 ӱ J?՚ir?+K+2rTtArffZI&Ro>pEV7nrV۟N`rBffFմX{r#_&Woc)_Mwo0z]m͛]O~,H Hr :.(X+ܵ!fk1![s=`+4>SnqL Pp5%9I}0B!_w( 1'd1"w{|PygkΟ) X#*"Bxr Vqs $W #M$D>9lIV;^R/+AQ9 122iAO2(+,~mfbެwm)yRYָɰreńХwjK^#4;CSs2Jiq\ġᩢ࢕<y%f/BJkx .L.1R[O|rl+ΥHgP5h_ZL<,R]-᫒{hC`G^ꨬ\ SxlEˍ6z_B3AHshμi~bJ[(P/i(D^ڌhq=\ 2`j 0LJiZ, cޭ"J9"n/<^I,/[@3,~ۋG4xq3ŗNj1%Q5;4JbgG44/Vs{zYQb ? l[3\p=F]Vih$$5JkD];<8#@bV.$w1j|-WQJ҆s fuɝoL!;hz5pzh7u"Zbdc>fsxmX~%h<) xL+fBaa8|w䑎s7?[b8Ma#D5C#NmYX?wp 8fZqb8%E.-Egfbq:3Ǟcq2iqF'HJ$-'ct4+Mԣ {NIk;B3@wu۹1 ܋Wp/z_H(z$FpO)ro#dsO_4}G-'GТ|j&? 
GZl *n9DnB/ȭ »]_V&KL"/P|}.6y,.ȞuȚKrp﯎?/<54_N%jm-e;;miqEvt\26\mM+ggI7.d ӷeAA}Gx3`m|TvBBq%S\a|[!3{aA8lGsz Ov_kM !߸ɔ)s ̘Nʑ~/o-b\]_^k.ѮՃGب票 xݳ٩zWE柯޽6yY9yiJn1/nC}4n;\2}җLRe߬L)݆d5RԐHiy!nG}v{R+v{ UF"7&rt暌D+b6;/+=2C`l8j@X!d" K^d@F29+m9yM@]lGcPA\։hm$5#lfv,Ԇ =p.XI#YG ӖRT"J齐mOQSR[*ʶ,ٲ؎d5tӺR$TWJnhiVu^k{:q}I- 2߼;mSf=k4u5/些hd)x}D49fNvgkٝ}In2geݝ_q)\E;1]i]% I9&Ҟ)I=L}]>q [_BuNv:o\UkEJczxSV4rZt/'K5K1V/~p c~U$a-~# xp]WMaO[60]-j3:%|?>};)q:Bg^N) S`_گ3(̪ 4)$j 76hDYr qDP(*W+ V\>:m4Mcvsr&4,i}jKu˖Hܕ%L7ɋW:~-;U1(_ X'7 >6JNYѲ]Cpe^Dd>?}{Bh-Oɖ?XMIJ5?*\l T#SUX,Y/d<%B=K\"dw"+Li'dTԛ=QJg qrV] Ћr#n]eb/>s=gϟ6>!mv`w aGr.f,r^/9?Hn-tq ^;s~hG‡#bj.溬M(#5RyFty(>2C[#AFsA# ^5JՀ2%w L{z8m%8N]%cUlo}UVѷEAVs'"؁e,CvIJÆxl-y;baw|hn*[ؖp/w8,dLNVFf'fA>0e)ȟHt*2rC8[)H&;:o>K""[]%IxeӖ@+;NCm|2qx7 }Y~SM7]^.Oϛ]j,s:$ԴJ"RsdBsC+ !?W//8Zfnz7n}ыUJZ%"Z[KuNq)H>ƕ0SLw-.G.ٗ"`2nwk0y%L~@݅o^Qw\zN)xoH>E A+PhQt1%s5pAJTEYj);vaѵj7,񦜴i䏾=(\֭\=}6фZR?#[K_~O^} v?N?T 0hV_9/ /VLm}|gZч1Ziώfn~u8$wgpώW*kB1H!Zc< tNc+1(2JIy˖1l?j1C"ۺ6k7Z-^ip8樍XQ,R)"ǂ!'E2ꄐ98pKYIpёt~E[dj5Tt 3.Z87-)Bot SPB" !%mrljnUT RIVpZ u ^tm4ڲc #p 130+q )E*zp{5Iu 8oc,@skl,G-͋UyL 4dȨI<ȉ(^ l^A- +Ew%EeF%mhmQ$36L^V.EdBV.Qk'Pr=)TE й3p')2d땸tw49vO.gr}Y !tIcS XNW&hMHzQm b4ȩkw~6tƎ(kCldHۑ5#l#{M^;u#r cbԞiRsõLja++?Պߡ*EZP@ܷ*LKRPN =n\3?+&t\YSDq䨝M0I׹1L1LhvW3]䵝| < srk琌ጬ]&^Fh'QiBoa`񘑾4{̺:jz,#:)SBq/tu\CSʧ-Uׁ玕8TJ- ֻؑts%WJsȎF!˜ajJ L)@9-d4HQxaKi2V3F-УBNQ %WY >̲+y1{Ʉ2Y&C\L5J m%WI8  D^hGJX*3^ZD򺌎jjLDyP)Ec|ќpg`#͉ "F\e#Pʖ!!& a$!.ld!euY¯Iꭕ^xb88``ZR:fɓHd/Ae J;$JAeԦ= Dmʴf!\#iP%c0=Ғ@T7~IEj#Ot|mUwL {fgqyn63fg?_ܾ;fW^ٲ?.nJ&R@g+C1|'9,MߢV,88r>0N;f;wRLK ab-z>^*}:gG㿿{?p3iT/IK6y@2z{A%:SַZ>۞X\u)L߶[.?ΗV֟nRnַϘ1JJ[b^I;^^B@bArnti|I}ے_R}ۚ~3w?sUOVڵ:Jd}P+UK~H'e/Uݏ?&ѐ9wY yZV MmlVv'aݾ8[Gڶ}|r9)HHc zu[D{?⳥_Y/hsߟz+|ݥ,V.?ȹ}}yjm9w!ffl6ّr wJ^v$jw.oEZ$W-]KƔG@Z/\֯/"ꗡh,˞}c)niG6Tk,٨z8L0c(|t/:>B5PSK#B܁%kS<>Hw_pOvD$cv[`A>hEWN o )|TUdKتl=M]3=4z3;8˷-Aݾ;r:->ѫmvsqҷ6>S&ex-mop}I-[Jdfe BfSO.%!P+ʕ+DJ,yْ/!Y<+7gp~{]Xvi6/y6Z/]c0Ess>|&n8:̩Oϙ,KB}"9 Nx6HIa>jAe'3 m|pzɹ svnqǹur5a1^-' ί]9YşMtG~{,.ӏaMm-?hN)_ׯXJrifď&BኞaƛlgFˏ/(\p₂ܦSf, QrǞy=1JgL k%^}n//ұM|oh tS}w(W 
Pl-gA1ĞdOFz&hL.H/5e(K0,eşf!Vkfb/RJ;-缔r`peӇ)&B z>Z# %NXկ׭eISXC8 [Db"M\PbD/ŅZҔ$ 3,SBCn04t.iac T[.$θRtjx tO:)S7JYR)'1Jij?Q+f 8#i/J:%>uHaHJXK=hQlC`5YBqM [&c])%Xjs t>_~t0PjG .D:D 7.C?;-GNQ-l>RLqCTs]}6b"1*'*kilB!3|(!Beb:@ Є{h0Ѹduڦ#\QFO"q q -o"4хjw[[NErh*kr{Q2)Q߼G*8ZoZ},qE.2D Qz˹Y`!Ks!m mvϣ).0F`82Z4/uA14h2 gҺT[Mhӭ;Ym-90FbP}8@6XYD dlv$"ZՕs"̀r?{`DHm2 }lPlsH-|cr,~L )Qjh=gХ)MPIi,5Q`ۡQ–DSiҲCoqd+:(Ec K$vlT0D%7KӢ;yJ5>j )H:ӌ+.-l뼩켙r) Gme~ojI.;ռQ>~~H-)EZ!j)FG<Ճ[VZm;HF fH+jim96 .(xmK祐Co`l@pMd1&8oX8DYd+Q jZN7Rp&51@NH Rq5-l鼬<堡IMbPV,F~;|{J֍R,&qi\I Uē} <[t7:@gDY-1:,I<*=f\+0gW xN:Oz3?xzf`+i13 rdDC,p`w%>.WO2V}o j=x'jKuiݥky'K3tA)E53ty'mXIP$J8;9cN Jj58iX7>AG L3D4gtOm:zԅRx[ǧ6zc5vTI5NS6 a_/.7UCxٮhY7Yڎ-ŝ&}`X!<~leZ=sL7 +H!d6'd`Y2l^ pwƞ5: ˶|S7J &;PDn?zWQk x簱 x"× Unyw,K$Brq6Zq1 PJJ2-ziB'NX(ZnűY"lOs`f#@Nqr,P fnbe4j΃>SM [Sxyn {*`AɒsKyANM# (IKI4Wղj9y[e ;Y ):I %7dcH'lVxEiKYET7Z, "Kj :QNcHSMIj"xslJOuN0nq'9gCod#))R%%~4jOcrBJ1}jZ+cw,aTmNV{+k 2D\Ɯ/1RJ,HM?B>:d=^Rp9C3&mQCr}NYJ &$Z)SbE~Vk#)= ?l4I[!\дnFJB`FȌU=b_@yUchh} ySEn=e#Q5fR̊<+ dj{춐;=Ua*UREpﭷND{Rw}_~ҡ}~Y[nvtN&/c'֥wĪk K3!wBw*c0.[د'! 
qqJ8D>kySmxvg4  xiT/H~f4">]]jԸ޽9+bߊLΤ{ŮJPYzz-Z4h7nKVڏ<jQ'Yhzޞ# j2tڮǣ0w[UGwrIhԕ3Ƙx<[_; [ Fek|rA90#x_EqkƈW E#KBs9 'O Ӫehe R@l$:.hDs7\N1t((>}qaD7&K4v;e@ ۚ@z({aU픷_=BIa,pwao ,Cm`tٱdߗlɞ,٤Em'hf!/UOTI0b\qdVPHYq[N;댅^ BF,q2 E)  w(r̈́3!T[D-Gss.G $ќC3(ciUw"ym[vYɆ9Q8ҁ|OsJ@)u`P~^(u`MhQ}JRPEF5gժ~2%&z #sf1/U1e2#*V ' 9^jL {ytƢ6A9VNU-[#x*c̓Py\[+TPV~gl}_#㥖~rI]SZѐ85JdjWbr'ʅ#vj&蝤x&)0^Tñ$ŵ  ERl;3ڐ߆_[Yy1t\4 |6v!|#/v%皾OEj@HnAJ {)]I]kv^{ṣxa)%/1uMdKŨ :pc&)w=-ͪ3.Qn&F(jFPM=ċuHF|Iܑ윐{xY&~?:)eVeT8CĭC;n;TڸyҞx^YC[Z[;AH*`fQ^42-3pl`29-`poVߴZ% M Pi4c@Ѽw7-1C{B^ma꽚+$_ˢ| f߻j.pYk?i)G7a7L뻴?o32ܝ¥&!+N^iબ`.lT3*rImaDAWF>K^qBWǝT-U@7CMʫH;[ s{V&tYJDpѕP#etDH/\h2_z?n0?x,,Cn.4sR!F1L{wS07\rp@ OyϕT+UqۨBę&Ŝ  warIԔܩDv$9l;k[@6 7J%OgCOз-˜.uw0J NrT{;x%FSOoMtñQRU~O[cԆ0}SR Ʋ2?38z͗!yAwQn"  ]lzՒI0nr" Mԡz(+5NGw;.,c^WL,˺u C.%iat]BtQ[}S'Ql]:q_Q xT29L4JLˑ: Q]c#IL1>.N\hY{Q$9LQSRF!q=c;zL H'@fr\I@{»q<g_g-/:KY/C~W͏?wVҿ/VEO;ïʌ DnLuM}cbXm®XF͑h I4e;W0ek >1[ -"VUTXeUF{-1RHυ+XHe7 SY!+4f\+b,Ʒg@Ρ~IsDSDž J{ 4cPj$*f((P䮆<0#:+Qr;(Cz)8iXCgH@#(1]lDԒasJ̠[ޔC m;3MSgbDֽk9; 77X+]s?%+x^c %)sL(КA3jӏR|4مy &En}bRh0qGziQt!Ts@"/PgҚ"#eVI[rrx JWp"4BXC[*3R1Ů|9#KŠ/``bgL/ xR1M8p3Y΃@i&mL=ja /j2) эİPp}I?If8PD 6'BQ]'=̉аS#0bt*PV2*+ʼ*Ɓ!- cTqAC#$|JXZ JZ$\!:5kP`SQҘkp.ԦhTz->r.Da3"Fr!jI] THW(v9F3nbybxI}iPJ*f,"*A,z迅=c~~&WFKB 9AA|CVӾF|O|\g˯3bSkԻͭu<0f. )kyjݧ*_\/~u{gʟ.W۳9_kK_.lV.ζƏf;x5n)FɀQk"Uu&o\= yH+۪Yw>}K.VEx|wp_#fx諻z(ױ8d y9rC6?8uuWaOnl1󻫅uݭ\7s}.BEh[J DyqBLM.ga@u?B~M68o] F}/[My4Nľ`Y >iUw$hvb R.kX u]&}:0! tx*0 u \Аi+{QϮGβ[qNYRw;!ܪA|!;!";!s`}G)bV1t9;+p<,%/0 ^0?.cD%+'(Qfk"ĺ_y($. 
D~WܹoyzVޯݩڷSko{h3 ㆋ7u$)&/]᫿,>]?ødM/2k7{FJ_I"5.]Q}+IyQt8Ԃ$S[kf@?ɂR9K!Y27p:q:߼"krgb@׻lzhEÑ nLӶJ& Q89dŪf*9\V6:XчU{:OaοtV mэEd eQ 9wDt`N?nV8Li Lh9qMVlB/8 J~VByC9}lxj2Y7u}!iOٯߴS2+˶_i MBmL7ߴP"Rbe&qfL)!J]4N\)Z'n 83@n}G7hR`s3xCM1)!WDl.s#3nr1ʴ Wda v9g |7%wꬸ14PD1a):2^QKTfP,, 4rEF[0dLB:lsRPēCQk8{ޣsFKw9x4M7&(಑Z49TIB&@zp]O*QЗQBNe8)Swwjx)DUb75vM@pH=3ּ `^.gcb0;?xV g[vET>&ŝoٷYύI?5t+3 !@A>Re#6 r6TAWC)=CRz*2ebܮ^6ݚ'QTkK3v1.Λ,v~\ |.#Cxxᝄz-h4%]g}8Fts;^ÕrBQYS,ϙd*WY7{5B5FƊ$Cܛscmo) T k5>|zX 5WC?9q.b'O!1`=UL1&4$׊B%QsjKoGA&g|jQOM;?( xKOStSkMιn<'1Jr΄Jd-ZOp-69\OW$\YL%#bʑ&yCtD^BxjRih6#*)Wp\'ʜu QaaIQ)( hGQg@gpj<8}'&|LBYE|F)Nӛ`U#;nqsbev[h_]еZG}?O詭dz|Б{Xa # ejLVFz;]al*_-3 (5a(7Yɝɮq=vP7E#%Z 1jvM,72|$XF $JaYΞ6)my;_aLlFoOjS td^֕fb2D.{ RK5 4:jtah0"n{hqPk ͝U^Z(_C1JRalS\LDRXe(!Q>G2E :8{ \iE]ndT˒L1BC$kC)Y&erjR)0vUr;+^b'+Ϳ+BDp*:v̂vVi3%ϩӜ(ԣS <ɥp$Z@J !yqĭTXq0sipb{ p4L&PLc`ڰxR܈ݲFSG$n$Or0(Jqj%V SNrWkmWq"_ 9@R1 L2mWګS hxQ[/ `DB!X֡2 (&;ٕ!N2Jyu?Էq'Z2 ݡucP8d Jc~3=p%6!"AhJPDjfy([(>ʠ:4;BU5ƐkpX#܀Rr ?Pԣ AsL1=^%Mip4gOhdM|M WO:õl%ж)`*&鷥):UE| LEECdY_mzAc~*O˚hVxMůl! 
0Yk!#Dwjiif+իzNZ!ӣ앞>ArwZv|x?FCQKt^Z<;&R2θr#蘄 }-ɴ\=o?oVa|nCM>NLf׫M{ F 3$jU%)A:@I׬R) g5ֻh5cy8fH+ezhzZqAŦF-5P V.軗{ܗ?WzP^VsW}]@gT310]zJ\Ii׽Z>\]H6\3\y!nezx 1*zpz%%W}vIJw.@ #9*==6ٕR(2$v0Wrwv]̓\SL|=9rȐƤh|}ϟ}|ѮnŊNolv ͡N116ø%J@L's;NNdPvS nuhr ^]I܀' q#NR͑9nt l`s EG&#DURtN2fᙀsV8-`@\CbYFhj t?SyDq~>R䰸iK^7Me:5ףPʘ A B͎wP?J(dW ьBUW\i&fbMd5IFw3i\;ˢs >yre&ܕCߋr)J%͗B Ν&-anFs@M!zKC̪5+Z;Wa1gy}sC0[Nb x|f֛ۣcW\sa]]/z{*-hա>cMƫ4]E ͅ/F^}9zlQ2֠ `!K/&5M0S'{;0 Fy"#@Hf-C=a-h oS>,Eۧ!Soxۤѡ\b& 1Yw*lrSZ^y<ӻj}+)}Ҏl~~DgRʩ` )N./&ziS :5&G`MY.3#Yj{;pr1xPiɯ~/**em7ϣ))XɴRVŚ#}~uDaLכ{_@c!j'V0φ ..|ǭhM{{j[TB@rȬ40[Y9rCbQ@JfB(is13 Iv-0cJx]prA`|Jk#j@ .'ڂ*RT2y¬MD*yW5 `TBi5ՠ!뮭ЂHxVLaɕJDZ$64a{ 7f$]-XQE^=-kŘdl!.0EE?EE2(98RšZlOAi #[fNW,dJ5x`͢ $e{&^,`kާ΁&΁ Z(-g(<]n&BG-" u'VrG@wf/h@V@L&FGVu4JxB݋Y vy6xbߓYX ۿ\Ms"ADj E S"nn|.0tb[`TRn;0ƕ\~;EJΥjnw^z u Uxa-`G\;3Q@Um:)G5yULtzHW%U$\0DtKP.[!oE|b7IĺRm:mTn薬hltC޸&T钆f;эR)[ b\'*rEdEc[]t˧"9(Cxg0"αNNyCsX^> l-MYMo}l6LΣpnm(5+W}{L7R>>Nr xaw yxt8 TA׈hnVoGenkOTF#4zXvSl,û ڻgppи<瓻[xR.87B5ͤ ^ʩ˟fO@L##HI2dǠYwfL33tI(h}$4)X1|A#ԥ:@ *aIG.qqA5* 98_ơn6kta'\V##fZL ur0 W*'.19C:Eܥ2L7}hf{ *̈OԚ>ޑ(xslgb|Rh*ZXO"&b3\o ʒ'7PE8wSbB\S]0)RX'FT J)6j;,HI`5b9I1gaCB (E- 9\J!v\AFUTC S|D3[!(Ⰲ=/>֘?,F`LY8aj"#&+P\Rc)̹9?Nj|#Hֹ6J['PT2L zl#"Щ5.*.@/l((D=u61P1a .NQ(xRL+1kO*D S+[:]Z -j`A;`Q/F-++! )\~09a_r|o2;SZe|:3l̡ aֳ0^qr><0^\~D4sк&?u =$J!RPZ<9Ԛ<\m$B)ɡ4Jxfk2K-$< e! ӆ: !H0[,F9J ύR RK&vexa)@eZO/,Rܿp29,5za: RApmFy bxSۨRh~x1Vh0D!oE|Ώ; d[ b\'*ryt薬hltC޸&TffM.y:QKۨt,X{[Gk7[>"} CʺE5='qx"eyD1"d0)ga&)R Q,T. 
&Č1ǘ0R~pɍB1IT0ʩWX!rz5RO\ϷQ1n2yf/.>LcF3Zadn2c"(,5~f>[#/] t#[1hcUeRQ;UFL.xsurqBp$es F!̍_Ny\zi*q6{>/"_>LmiGGPi?~ ]f?Bw-_!nƒ%H_V"ťJU"XA**)X{3[s5UVAԧVZc}t8a R}~uMN|.t$;ܱﴎ1aZF F8jeՌ`97j~Yv|Q+Zfن30~E˴rYƴp > *5X+fRCp:~e.>]o0 _l3I.E9_&w]oHr!=o=a'F)3l5  G x)40W🹏P F(ꇍ~=)w _>8C8w!v!!su@Ec>Uf^: Mm%dTܸ_h*d}XQm\<حru5 v|v!%ҹK: \2I4ҽr~RC*[FyjjY=.$Vh=P) ]~L0|1Ξ$9Y<+U]ԡ(%^KLT֠Jwh4dm:;7}Z^ݳOVkWߣ>~X/G]}~RP@ЬxŻ+>'Yyonz`:/ai8>NP0/_/= AHHn!7Ϳ[yC =r7F#qx8JwMw57?ƾż^!Mv ~s}0\A7yF&EMzÝ74˗c6RS*UI=|*c0fq}@ 6rXuEٕS-ֵ4KҸb[!K+&kӽ( -Q5}hrQ/oO3tn*ͺng8GΰxxP Q SѹOr v 3kh:k"t khfk.vq.vrzck50w#o, [`Ԗw{ݹA+R q nFۨ^V|3bIZ7VѩhX JM]r>vטc,:@?(?Ew-8V"# .s0KA`9\=˔4_Hpo4VAVW(YUl0e,)a5 Xt}qшYȷ%{tōVlڌpm&5Tai[5ͽUZ 7ad.V@r9(` %-v͡-z1 w)hy2DI} 0Nqi&s;#e}-Y%WU}.c8bA6ZDa<NSVcH%w;D9k몙T"L 6aFqh-!l1d)ňA \h+S32@0M^alWb$U!cɮ!"", H >2m>1G qdW#==RxqdW~"%^*B XO N59"^>yDpz(CIQߝVVPF>k|1(n 7|lE3Nﳉ^!OQN4`4u|4,a>B כaon&' Ot^~ifūd8~O1rumEOzA9,B}dǠY!2d L2`Q mN7/͞%27m@qX@ڧ>Jn>;xO>?ec2Le* K+u8_Wj$۰xI;cRVp)t+?Od%Ih4ΐ,#L'2Pl!˕]1 lOZ'-`>p)UӛUf=PF@ K:75xE" DKUHU kߪF( \}+cyI.ݠkDugB"U qq q%.Gtb-LX5 f>m܀𿰘vzk߆FpE^駩F2/?cˏe\L0\d$5UghVs2xNYvn6m>܁?zqqπ8@KmX^rּe^ٛ?RMnlgLj!&lA0PJ _ȱтCsD!Cen JF2 t𦆪&5Mߟ7Lw:$٘&P"DFh+H+|G9,.d6/Y`{m$;;8Gh[[Pk`Ub0i%ql+W\T\]fX6|-N(`Pe= ԑo'7%3 R/7ؘ&qڞDItޏ7aȑ٦$%w4c[@/-g9q/{]xRKT~Xma98çwוt4oL7SD.wOD (n ə~ l[ wYko%px%SeuoU!zjUkBO*3J%b.m{ѭXLF-m{J!l{m{̏k<He+WXoe/SI,ⵗoun9zdž 5˜p8DusD9.tzIq,$ 3"2H9Ħ.~˨ՀlDR|,x:wj r͙C?}Ni'SMdSUq66YebPF%%,JSb=-plb̿i6[[%ɀ=Z,m8{S,V"}wTXGM%Pjɦ)J#84 3w2C8y,3S ÉGcp$DRq,8R<PblP$q Uă>YaSP׽¾Dwގ!GCTTcFT/-~S;J)xCTT RzRJRbgSJQ=SMyG8] Rzqj2{)8)e{r㬔J/GL^J/ZJ'7J"R=R^J/ZJ8 )RG5樗˖RSTR;1 R3w"R;BR!e7yO#TT˖RBR )%OJXp)ԥDG[%ū=Wn^RTыoRO~<@jhȉ>yV PP23YhCtT:GV߽.xD GО_do]+Cu<:9}3=*k(I&7(kiMR LUjy !ϖޕ lK-Br 3l[TEz&T Ƴ" MCelʳ򞪈b4nyM8Hh)9}2PvIO][[~aț=ۖc }riZ)+7}-C1(5FiSl]r}RX)޺/̎4he: B4޳ N'Y5=ӂ'^5=9@ˡ')4?) 
&4ގw;ttQG;2b=_ gZ1mWFAĥ t) rGeU'WI?M#/A( 6c:Rl HI{8ςoa(gcf\VS,{L`17 <8:=4[NH =x`*X7?@5SлTA)k鸇`V a /4oU]PV*pv?g<uVn5#D+4=9]ȝGx#-\cxRA蟃uD77ex(x^h"בݠ#֍-wX+Vؔ PGըS^,#ZGnqRTu x} f}7 0 fOHiN4]m9]t} yYfP!fKb[ם}(^h>n(x<'7^jXL5V{w#_˥ڪ(?UGj̖ZfYqwšrVZt,ķƅGd jӛ  Zoͨ>4t@ٹXSQ֟rj?n1 gju6 xNFlع+Ĕ  IFx_&qZ#:gWqRO+hĊgdkhax>9bJز [R#jJOit НhҳhL=EgImEM9zrT)Y/,28 . Q-S,Ag2FzsSFtz{˟'6@5,:P3FDhS{TkiHsFkvBU;ԓ;tD8F|KV*Li݌rS"W|7iwGgA5:v3L\6lE.32`'O6ۭ^J.ͳz9_nä_2sCܧ(F4injnoCK[uw]F,~{a'rzr~{ Y5͚Y,&K۹Y<#0ZFǍ bWT<=e>#.8:7&n ٳ߽'g )^p;HOOfzbUXhbKl<q1 YżYUڍkvk\-tro,~?o) jKz >-ɍRME6 o9X*1 ~IZ]yeʴ;`]>b!H39A!SkJ$jx[ܻxI!w3&A4z{HSɝBwCz `o_P{ & =xkl-=iH=[#0n{v$ JS~J`Y}IX 灵64 + G' }}05f3hCL ՃIZ7@z>8ގiFiwU^Fp2ypК콋A$T ޫh;Sωe3bϴD4v|w/ȱOwxiDx$bL""D2hTyH1ђ G B}RLoTiI059$PZrfJbyIpuN0U),vۙ9=al1%d~+fÑa½vݯl y٫>Nn_ |\y:<=cI8gD$cR̯`ao$1RW~~|4+zŕ+\ًB0W?eKW-|)FcW--,N4^9D8O5d+xC^]G8 uAJS=S X]bRAsJi 1&; %\ $5!7AbC&H"lvbcbp$`켈 : BHk10Վ2(W5Jc9spu1|ſwݦA7wOU6m#eO!McA'4 b:Ug CI#E(Dמּ0H>ӸT(iRQYӧ{<~-Woxp6N'ˉ^ K潗 C\RQ c95Dr_Gl~/GO{T us׉r<1L;hc krO,1BQX2ĊH@%D'R# 0ÁA&(P2 f gWA,C2"܋QD1SHDknf- THHXXh1ȶ;/L1-Yyb d#€S C&wmǒ/Eq&ȣMaP2^%%'A]R%K)R:jZٝ RlNNJhV4 "b(Jb(zD(VjyyX 1pv$#8UW{6neEa?l;`'C?7t&gY4<q2M糋, fV)@ |Ͼ2sl@N+򒃍#L`]Z.W}Ev;'ӒW/ %-;Nz L;.Uz,-um+v TAvU|kf$qw??2e|:yA6 %!=8.MlGaSu<Ӵ^@4:,R{6 1[7 ת.Ԗy9{u7})GS+Uf:A9SG|XYu"/Zn΋j8g]x_}֊֯}׊RubMw{ևv2KTJ kV@o!񪠭ý8#LNLBok;fև&Muh n:΄6kQ]n8-+B7H!AUմpH|0!wAv//{X9rv %ca+lj\gMvXXQ|94&|MleгeEwVys˵Rky΋>*e@v=ΰ,3.9d3W@o lV@DPZU&pzy~q0?(*V rYª'>5u;Iȵ>mx>A\A9޾i&=A.pCqr"Ⱦ 5pJ)WO{|@)O B#yw)wvsgwƻRs;iۻ\.h]u\NF#t8*`Ѣ4eɿRnl{)j{ zDF2QVSbҠUٷzu|$ɡܤ$3Sb=N>wO,{6 91Wx)j?uTixU|WYx*~GI6QQ'7Ҏ E_4[2X\݅l<, bW*V'݈۩ݻ: =Z2Bi69*a3,cq_ ҩ [Xhaz</y:?l??DJjux{aY0[nFF9Vu'Qw0 ?_YԕN5elH=)(ɚu^pU\$RT:%'z ]mi!z0_!- e8뛙?b+kY j0lY ` 5-D9lGf- /_{Ox-u;Lo>]G@7Xe^*ř{,qp't&'ɟ0[$˷=FvxxĚ#1T$+äxl7\I݋:5]Oi؎5WZy$ |#$=)GJJ|`i< (ٛ3Eٹ]y<̺ѻ ph<;_!>E}FHZ ONIp!M< E1J5 S2?OEL3QӊϓdU u i`;s+G4+2){c﵉޸/v)C7o~,[P.> ~v0JKItwU4b |}Rj2 NiNC1}TzmCDYpn "eLB.aD/Vw#$1ZX0DwlWUv6VQ;K?nI[dJK1gK9LɌ21%-9cdl^CP6pOQCn`` UL0$0+İ&\Hr=mLE4cŠ*o89Z?#I̗E`4Ex¾,'9TIkz2JcR*ia 
iDfn{Ʊ$ ZVϳ3'qo{71~g7^~%G3Jjܖ:2 ⁊B@А<")$!R A`1c3! E$jWj05?NzF.ݾZCB'w uiImND )52tZd-S#e:EI^ iP@ #P$ +!bP>0qd >WZ ܀MZ5Xo`T%1T=B`X~BaP ',GFj$P6 8V.7wz-;MZ6>_Ԩ.ĥ/.a 3[P`.˛/=Pb,"WwF2їq?aE "wy^v@s|8E==IWw{IxO"1Lv~==Md0(U% Lr_cxض'9hD!}LXm@F;)=|Gz mlQvl:Wϯ^\ɻDJ~(Ŝ9@ kd5k_ˊ5BnO⻮(Iܝ\{m:lNJN`J*kS]js5߀ܴGQ涝]X!p !+(햬E Fa6֞ B6IgdN;B]xNf?9IqV;d{c?#ϽIՅǾ.~Nrq:,\?u:I2arM \!;OL|LHEt,:|*xJ_P6YQJbxx;v7cWAyFLuGi1=KFXa r\ ]՗HXnC!R`EK4"3m"ʵ RD$󵑁$7Ev1Eo&m#>[ ǚA t0/u< Qa5Aw[;BrߡhOY@\ַ\K[> l@dQ&" #Sm@Z+)tI^8܏Vp?孒Txymm52|7=0,J2g=FhDI:)|+f[ y>kB&~ ;dgHM:k )t?~ʆ-kφML4EWAL=fT^ L͵}DTmfs51OQo&.r<фzp8t.#|ٷ&c̈-` leZի0>~&5xNq˥?wo_n}J[LuE<z9Yg;#L¯d!/ xYl8|\* a?%t 0pqǴHt9uAw`u=2Y|n&lqSgd4I֯3>3@Ls_ֿߢSlY-A㼡𥫀 M hgmKzBW2gnXIk~BnA#."YŕF}b9bt6oZ[μ#%?{ⱋZe|,'k\yV x=J&IC(2J?|kKS[>͒;w#֚ Mk SXMPQ+y SAY/zX!&$,\o= nD +"6aɕ,|P T_#eس,mɶ!`Di;nr QjڍE Ҝjl 7ØN) ٻ8n+WX|&=nLUjkWiKDp$93D6d;cIsppsЮN$X8ly,CMZuXâ֗*&5UL0s[l;S|M7~+~ח{~Zϛa~)RY0]F@f-e `.qEZ<Z~FBxX֝{n]Q9!R4V* !gdpFIx3eEbK^eH,cEP$#NHiCjT]bB>Z{ZQQ )g]1Ź˜Z[0SU]!to^)ohq7*{T"o ϼW[YF'-O΃PB2=lj+%bٺݳ AǾ#iA/r`t"TОw |"?Ğ\c, <L62%H(B4`^"5)%Bt" I5BQ ^3`:73PjIIZKhMV kQRQpܕ R FKZ(ԥLB? B{Z [}< futs\VBCuG (@`=~GȔ Kl5#rpzhR(fĔZ)?AѢ8\v Q=`-iJP6Idlu?_mW3fgC? 
,\9Ͽe}3 j߈FMc{yqco:TfG>nʩ=la>OV VNsٖYyC!C4 S'6\-dD+~ubu*`-7кu7Li/su3;hbeon#ѬNp-Ju!o L@LɅ Gn"hV*J <-9R KRRD '+vurÃUҖ!+7Ӄkq;UѨ_dEpZ"(tAvك #ᶽԫ/P7h$K8Ji)*@t}N^Cnj;ǴᡩѪM_ÕVձ$20DOЋLw@N5HwZH6jKdXjYE2X~Mjـ:szEK}@e] C4 Sr;s^[ B6mD@({@->/кu7L 'q>nIt :hcf-yRLv;nvW0Y" Zԧ~&|pQKK+Wwv%Uw=7KKx9}wf?K\E6ҿps]^|uJuć9K6M8 /+H&_٫~WV 0RDT%RTϒƋ&한WZsư Vm;0?^Z+jJo4xPs$ߗC'Rip7^&T>ɮt/ɃFC5IMOm oϫ# !=oLF#'ϖ3uIR%rG~GFW>E0YmʹIӂpQbTeN81TI3f`Rj W)ҲBhlenwOsJsN,X хTkHOgRF2n-N<@jh7~Aq:}fU?+sgUJ㻫4jyxAr_>!gS'v[޺*jռ?녕&^\Z9_ޯ,Wv+Zݾ\?^Im\Xq^ۓ'rmpKj")~!d $:SCg/gRUdT#9 MI LeiJ0ͩ x#53Uph5s:j0̰pPnކ9}KcWS}|%'l<殻=y~oSk{-xp5[5IѷFb;ո>Ov3) R:@wnSU :o6R8(b ^Nׅ劊EOun'mtl.9qcZJ/E;R{O ,7TL2!n"%h]])Hs@3 ; 4!P J?sU*4K -dJ,3& m4Ϙ`Z BTUwww,|]6 :-ꨭ7}qbr^oh4uovQiӒܖ1Z͍"FvבUT3h|vԐ!H'&L6hUfR$ydVM7)/t&HheX$j4wRl{>zw!iN #SVO3]*fǯA/*UR^{ac\k q+1/ ;a\◔[3}t[3ۤ`hEobv[) S4[bn>bZwS$({.w@mԄ.h0t P2%v2!X $) i4sPsW"RF\e,yN3yV0LRLh+=QnXCxKXAEDj2"5*+Y%HϞ|RϮuk'!B?fU:~tKyڇ+qpVSE*pßz1w_\ի۳S BؿN>Iyq\S4>*-?s.W^WZV;8{铫?.?R*xz 9n{)J9ʛOK6'&2^9 $V=e#l5sX3yvz5Ok.w9~mˁj|f)4Lh PRa F9)%gB\)`(0%_5g^v3qat5M}=[g:'.9!=b7vsb?suDP9kQ3SuPuq =to7w_qAnj>巋[O[2{2tv?.]~u?cYdf? "9{v ;DQ!8çwIH<7h+ATA,H/o81U`^I2 -`)ڞ(jCe),*:+R%J`1KApvhal- a6o`k2b v}OcNdX7[c"۴̝.bpQrڴw< HV=v+&iU47ɐ)V|V;#(rեjgɖ[]wufs"#ZzxW 'jr:\ߗIpߗNӨi{%%H:->n=/uЪAiOZ[AӞa|6mXJG͠jaN=Jd¥U]D'>+g6XV&sT1G¾/?3՚H=G;: ctM)xCf#J10P-ט(GyjtFq0rP<}}T7|FӠ/\w)t?M1Q Rj@>ϛTSdFQ02aItMj\(=6 C):-&RTa(8\W7QN/: |}5njRxzb9I,r5ǍҰ9o8 >QdJ5R}R&PZSqP(*&Ra(qF,笾}8JS}ޤZR:FiXNR@iXNԚjjK Pʪ"L PؓwF)XEq-]cB"Qz( >r2 ꯩϱcF) A{:)>oR (=nDuvfb( ˉZS Z̧F)a(e҃G^w}T73Ja9Q.b$P2`"p<{KnE5T( ;=B)!@i mHE~13aKlv$4n[I$s$$&) 0[T*A䯡^Ǚ\Leܑ43֦sqnYJ! 5(SJ-I =`#z.3ȴ((7,H͝$C<+J9ťCa$ߩki@covoY;.SXE%xgﶭD{ @ýrdH`&K-sDS\8gjZ\!VR0?<(%,SwsO)x$UE sZ$ 6֤Vf6MFߔ<̷Raː᫝;$"8~MhҸ/rɝw^,#x@)@]~a+$i+";vj XӸVoZLGM?[uܽM 5kpe`|_׹TZіݤsc̙4 JYN)FD8O)gaZf.G8mbG8NJ9k. 
13941ms (10:11:56.120)
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[77628630]: [13.941864192s] [13.941864192s] END
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.120377 4984 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 10:11:56 crc kubenswrapper[4984]: E0130 10:11:56.120880 4984 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121015 4984 trace.go:236] Trace[623288102]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:44.539) (total time: 11581ms):
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[623288102]: ---"Objects listed" error: 11581ms (10:11:56.120)
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[623288102]: [11.581915432s] [11.581915432s] END
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121050 4984 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121448 4984 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121467 4984 trace.go:236] Trace[1129813090]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:43.183) (total time: 12937ms):
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[1129813090]: ---"Objects listed" error: 12937ms (10:11:56.121)
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[1129813090]: [12.937939174s] [12.937939174s] END
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.121492 4984
reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.122708 4984 trace.go:236] Trace[254242093]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 10:11:44.222) (total time: 11900ms):
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[254242093]: ---"Objects listed" error: 11900ms (10:11:56.122)
Jan 30 10:11:56 crc kubenswrapper[4984]: Trace[254242093]: [11.900513676s] [11.900513676s] END
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.122735 4984 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.132112 4984 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.158660 4984 csr.go:261] certificate signing request csr-s96ss is approved, waiting to be issued
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.173491 4984 csr.go:257] certificate signing request csr-s96ss is issued
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.883001 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 30 10:11:56 crc kubenswrapper[4984]: I0130 10:11:56.901986 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.020428 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.020988 4984 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.021061 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.023302 4984 apiserver.go:52] "Watching apiserver"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.026400 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.029616 4984 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030123 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-xrm2v","openshift-dns/node-resolver-6tdgl","openshift-machine-config-operator/machine-config-daemon-m4gnh","openshift-multus/multus-bnkpj","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-etcd/etcd-crc","openshift-multus/multus-additional-cni-plugins-5vcbf","openshift-multus/network-metrics-daemon-sdmkd","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030482 4984 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030656 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030892 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.030966 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031020 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031036 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031074 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031417 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6tdgl"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031453 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031535 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031694 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bnkpj"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031776 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031829 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5vcbf"
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.031848 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.031868 4984 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.032993 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.033361 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.033596 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.034670 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.034956 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.035640 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.035725 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.035657 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.036785 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.036975 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037044 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037175 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037288 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.037617 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.038809 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.039274 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.039321 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.040169 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.040337 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.041318 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.046387 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.047704 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.047730 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.047829 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048081 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048100 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048195 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048226 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048404 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048422 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048710 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.048816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.049299
4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.062221 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.073722 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.082539 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.091434 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:14:44.290899806 +0000 UTC Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.092422 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.101503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.117380 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.124474 4984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.126945 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.126971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.126992 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127008 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127026 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127043 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127062 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127079 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127094 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127110 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127148 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127164 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127179 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127230 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127266 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127280 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127295 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127310 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127339 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127357 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127372 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127403 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127419 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127438 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127452 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127469 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127458 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127484 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127652 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127692 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127717 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127758 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127755 4984 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127803 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127871 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127904 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127929 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127955 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.127983 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.127992 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128000 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128015 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128039 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128065 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128090 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128090 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128116 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128143 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128189 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128184 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128196 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128285 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128318 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128375 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128426 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128475 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128561 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128597 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128646 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128681 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128716 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128750 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128770 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128784 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128816 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128849 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128884 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128916 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128948 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.128985 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129018 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129050 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129084 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129118 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129153 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129226 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129045 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129080 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.129272 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.629224435 +0000 UTC m=+22.195528299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132313 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132362 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132383 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132401 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132437 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132453 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132469 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132484 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132507 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132533 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132552 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132569 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132585 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132603 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132619 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132635 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132656 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132672 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132695 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132718 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132740 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132761 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132785 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132807 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132854 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132877 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132899 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132921 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132945 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132966 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132981 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.132988 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133054 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133077 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133101 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133123 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133145 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133161 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133178 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133231 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133398 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133419 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133663 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133745 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133984 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129293 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129298 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129491 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129530 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134137 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129648 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129655 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129757 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129764 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134176 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129849 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.129988 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130305 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130474 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130508 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130583 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130599 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130616 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130654 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130820 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130871 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.130916 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131044 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131055 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131066 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131388 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.131391 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134397 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.133197 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134506 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134553 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134591 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134629 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134669 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134704 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134739 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134770 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134802 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134835 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134868 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134899 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136377 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.136435 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136472 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137405 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137457 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137508 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137560 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134591 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134666 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134813 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134847 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.134977 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135017 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135241 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135608 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.135962 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136328 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136697 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137056 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137186 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137466 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.136922 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138053 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138279 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138922 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.138935 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139061 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139414 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139447 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139463 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139742 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139740 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.139891 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140133 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140186 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140374 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140435 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140514 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.137613 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140898 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140991 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141091 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141202 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141315 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141591 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141698 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141794 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141898 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142043 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142139 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142214 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142302 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142382 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142454 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142523 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142597 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142678 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142771 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142865 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142863 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142943 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143079 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.143100 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143122 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143142 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143158 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143174 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143190 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143214 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143230 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143260 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143276 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143292 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143310 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143329 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143356 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143374 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143390 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143438 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143454 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143470 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143488 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143505 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143527 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143544 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143559 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143574 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143592 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.143624 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143675 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143691 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143707 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143723 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143739 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.143756 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145551 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145583 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145601 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145621 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145637 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145653 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145670 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145727 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145752 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8r6\" (UniqueName: \"kubernetes.io/projected/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-kube-api-access-xp8r6\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145773 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145798 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145816 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-cni-binary-copy\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145833 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"ovnkube-node-xrm2v\" (UID: 
\"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-daemon-config\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145881 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145897 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145914 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-system-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145930 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-os-release\") pod \"multus-bnkpj\" (UID: 
\"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbclc\" (UniqueName: \"kubernetes.io/projected/0c5bace6-b520-4c9e-be10-a66fea4f9130-kube-api-access-gbclc\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.145989 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146004 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146026 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146044 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146061 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwz9\" (UniqueName: \"kubernetes.io/projected/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-kube-api-access-qhwz9\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146076 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146095 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146109 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-cnibin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146124 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-multus-certs\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.146138 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.140973 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141051 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141070 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141108 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.141811 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142010 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.142639 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149113 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149156 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-k8s-cni-cncf-io\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149187 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149206 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-bin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149222 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-multus\") pod \"multus-bnkpj\" (UID: 
\"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149247 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149366 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqknm\" (UniqueName: \"kubernetes.io/projected/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-kube-api-access-mqknm\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149383 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-os-release\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149401 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149419 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod 
\"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149436 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c1bd910-b683-42bf-966f-51a04ac18bd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149451 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149476 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149495 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149493 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149535 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cnibin\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149574 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149590 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.149394 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.150334 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.150978 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.150971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.151552 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.151731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.152353 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.152905 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.152924 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153101 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153164 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153528 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153522 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156151 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155786 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.153994 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.154092 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.154316 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155137 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155414 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.149607 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156693 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-conf-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156903 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.155776 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156034 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156300 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.156496 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157387 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157654 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157890 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.157933 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159032 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-etc-kubernetes\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159164 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod 
"fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159277 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159596 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.159801 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160181 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160202 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160185 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160279 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160304 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160336 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160360 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-hosts-file\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160379 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-system-cni-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160399 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-binary-copy\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160420 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160440 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160460 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c1bd910-b683-42bf-966f-51a04ac18bd2-rootfs\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160479 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c1bd910-b683-42bf-966f-51a04ac18bd2-proxy-tls\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160498 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160519 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.160539 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160559 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160575 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmq8s\" (UniqueName: \"kubernetes.io/projected/6c1bd910-b683-42bf-966f-51a04ac18bd2-kube-api-access-rmq8s\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160594 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-socket-dir-parent\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160616 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-netns\") pod \"multus-bnkpj\" 
(UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160633 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-kubelet\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160661 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-hostroot\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160678 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.160850 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.161033 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.161091 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.161376 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.161699 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 10:11:57.661676941 +0000 UTC m=+22.227980965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.161922 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.162015 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.662004288 +0000 UTC m=+22.228308112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.162554 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.162918 4984 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163551 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163648 4984 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163733 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163776 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163792 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath 
\"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163807 4984 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163821 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163834 4984 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163846 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163858 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163870 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163883 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163893 4984 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163905 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163918 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163930 4984 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163941 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163953 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163965 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163976 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.163989 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164000 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164012 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164023 4984 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164036 4984 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164048 4984 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164060 4984 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164073 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164083 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164095 4984 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164106 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164117 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164128 4984 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164139 4984 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on 
node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164149 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164159 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164168 4984 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164177 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164186 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164195 4984 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164450 4984 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: 
I0130 10:11:57.164471 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.164870 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.165863 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.165931 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.172842 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.174409 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 10:06:56 +0000 UTC, rotation deadline is 2026-10-29 04:10:36.873082417 +0000 UTC
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.174760 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175129 4984 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175151 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175169 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175186 4984 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175202 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.168715 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.172368 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175219 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175323 4984 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175339 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175354 4984 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175368 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175171 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175382 4984 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175397 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175410 4984 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175424 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.173087 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175443 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.675425609 +0000 UTC m=+22.241729433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175099 4984 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6521h58m39.697999466s for next certificate rotation
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175468 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175487 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175497 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175507 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175517 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175528 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175538 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175549 4984 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175558 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175566 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175576 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175586 4984 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175595 4984 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175604 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175613 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175623 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175631 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175641 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175650 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175659 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175672 4984 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175684 4984 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175696 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175707 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175716 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175724 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175735 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175745 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175756 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175769 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175781 4984 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175794 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175808 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175822 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175831 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175840 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175850 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175859 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175870 4984 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175882 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175894 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175906 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175919 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175931 4984 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175943 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175955 4984 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175966 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175978 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.175993 4984 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176005 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176019 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176031 4984 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176044 4984 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176056 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176069 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176081 4984 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176092 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176104 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176116 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176128 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176140 4984 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176152 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176163 4984 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176174 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176186 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176198 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176209 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.175837 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176222 4984 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176238 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.176265 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.176285 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.176345 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.676328639 +0000 UTC m=+22.242632673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.176269 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.177120 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.183468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.183899 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.184214 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.184552 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.184991 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185110 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185706 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185890 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.185931 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186660 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186117 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.186672 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.187021 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188686 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188686 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188719 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188829 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188849 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.188980 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193216 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193415 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193710 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.193866 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.194188 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.195206 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.196764 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.196810 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.196827 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197032 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197050 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197216 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197270 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.197753 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.199935 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.200321 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.200630 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.200968 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201000 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201159 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201162 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201447 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201625 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.201681 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202232 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202305 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202434 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202535 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.202989 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203132 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203198 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203456 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203556 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203796 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.203854 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.204132 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.204736 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.204748 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.205435 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.205574 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.206112 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.207648 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" exitCode=255 Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.207782 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5"} Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.207902 4984 scope.go:117] "RemoveContainer" containerID="ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.208037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.208736 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.208954 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.210995 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.215043 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"message\\\":\\\"W0130 10:11:39.159607 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 10:11:39.159917 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769767899 cert, and key in /tmp/serving-cert-457381567/serving-signer.crt, /tmp/serving-cert-457381567/serving-signer.key\\\\nI0130 10:11:39.322438 1 observer_polling.go:159] Starting file observer\\\\nW0130 10:11:39.325527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 10:11:39.325676 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:39.326402 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-457381567/tls.crt::/tmp/serving-cert-457381567/tls.key\\\\\\\"\\\\nF0130 10:11:39.604385 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.226632 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.226865 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.228202 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.233859 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.244127 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.256704 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.269061 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277192 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277234 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277283 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cnibin\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277305 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277326 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277346 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277365 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-conf-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277387 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-etc-kubernetes\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277396 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cnibin\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277417 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-binary-copy\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277491 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod 
\"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277516 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277540 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-hosts-file\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277559 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-system-cni-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277580 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277600 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: 
\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277625 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c1bd910-b683-42bf-966f-51a04ac18bd2-rootfs\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277643 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c1bd910-b683-42bf-966f-51a04ac18bd2-proxy-tls\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277676 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277699 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277734 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: 
\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277755 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277775 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmq8s\" (UniqueName: \"kubernetes.io/projected/6c1bd910-b683-42bf-966f-51a04ac18bd2-kube-api-access-rmq8s\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-socket-dir-parent\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277827 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-netns\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-kubelet\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " 
pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277867 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-hostroot\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277877 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277889 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277347 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277919 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277323 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277951 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278016 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-etc-kubernetes\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278031 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-hosts-file\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278050 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-binary-copy\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278062 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-conf-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.277934 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp8r6\" (UniqueName: \"kubernetes.io/projected/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-kube-api-access-xp8r6\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278108 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-system-cni-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278107 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6c1bd910-b683-42bf-966f-51a04ac18bd2-rootfs\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278128 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-kubelet\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278147 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-netns\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278149 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-hostroot\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278170 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278176 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278197 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278223 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-cni-binary-copy\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278242 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278248 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278283 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-daemon-config\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.278278 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278345 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.278464 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:57.778408572 +0000 UTC m=+22.344712416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278603 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-socket-dir-parent\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278682 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278705 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278775 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-system-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278814 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-os-release\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278819 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.278837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbclc\" (UniqueName: \"kubernetes.io/projected/0c5bace6-b520-4c9e-be10-a66fea4f9130-kube-api-access-gbclc\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278861 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278875 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-system-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278880 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-os-release\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278886 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278915 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwz9\" (UniqueName: \"kubernetes.io/projected/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-kube-api-access-qhwz9\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278961 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.278983 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-cnibin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279030 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-multus-certs\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279054 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279093 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-cni-dir\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279124 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-multus-certs\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279153 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-cnibin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279164 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" 
Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279055 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-multus-daemon-config\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279464 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c5bace6-b520-4c9e-be10-a66fea4f9130-cni-binary-copy\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279573 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279838 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 
10:11:57.279899 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-k8s-cni-cncf-io\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-bin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279965 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-multus\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.279989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqknm\" (UniqueName: \"kubernetes.io/projected/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-kube-api-access-mqknm\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280012 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c1bd910-b683-42bf-966f-51a04ac18bd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280032 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-os-release\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280055 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280075 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280155 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280170 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.280183 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280197 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280209 4984 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280222 4984 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280235 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280275 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280290 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280302 4984 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280315 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280328 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280340 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280353 4984 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280364 4984 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280377 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280391 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280403 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280417 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280431 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280446 4984 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280476 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280493 4984 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280510 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280527 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280544 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280556 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280616 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280617 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-multus\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280662 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-var-lib-cni-bin\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " 
pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280664 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c5bace6-b520-4c9e-be10-a66fea4f9130-host-run-k8s-cni-cncf-io\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280841 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.280993 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-os-release\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281278 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281485 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod \"ovnkube-node-xrm2v\" 
(UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281677 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281719 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c1bd910-b683-42bf-966f-51a04ac18bd2-proxy-tls\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.281867 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c1bd910-b683-42bf-966f-51a04ac18bd2-mcd-auth-proxy-config\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282061 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282082 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282097 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282135 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282864 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282881 4984 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282893 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282907 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282919 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282931 4984 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc 
kubenswrapper[4984]: I0130 10:11:57.282975 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.282988 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283000 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283033 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283048 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283061 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283102 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: 
I0130 10:11:57.283115 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283127 4984 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283139 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283151 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283165 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283184 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283197 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283217 4984 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283228 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283240 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283267 4984 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283279 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283291 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283304 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283316 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 
10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283328 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283339 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283351 4984 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283363 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283376 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283388 4984 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283401 4984 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283414 4984 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283426 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.283438 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.309902 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwz9\" (UniqueName: \"kubernetes.io/projected/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-kube-api-access-qhwz9\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.310127 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.311281 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8r6\" (UniqueName: \"kubernetes.io/projected/007eb083-e87a-44f4-ab1b-7ad0ef8c8c19-kube-api-access-xp8r6\") pod \"multus-additional-cni-plugins-5vcbf\" (UID: \"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\") " pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.314030 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"ovnkube-node-xrm2v\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.314885 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mqknm\" (UniqueName: \"kubernetes.io/projected/5a9a337d-bc6b-4a98-8abc-7569fa4fa312-kube-api-access-mqknm\") pod \"node-resolver-6tdgl\" (UID: \"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\") " pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.315357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmq8s\" (UniqueName: \"kubernetes.io/projected/6c1bd910-b683-42bf-966f-51a04ac18bd2-kube-api-access-rmq8s\") pod \"machine-config-daemon-m4gnh\" (UID: \"6c1bd910-b683-42bf-966f-51a04ac18bd2\") " pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.323030 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbclc\" (UniqueName: \"kubernetes.io/projected/0c5bace6-b520-4c9e-be10-a66fea4f9130-kube-api-access-gbclc\") pod \"multus-bnkpj\" (UID: \"0c5bace6-b520-4c9e-be10-a66fea4f9130\") " pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.328563 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.346071 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.351581 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.361482 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.367470 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.374516 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.374915 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6tdgl" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.384622 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.390187 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.393191 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5
358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.395883 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bnkpj" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.403342 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.403924 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.415041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.426218 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.437390 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"message\\\":\\\"W0130 10:11:39.159607 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 10:11:39.159917 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769767899 cert, and key in /tmp/serving-cert-457381567/serving-signer.crt, /tmp/serving-cert-457381567/serving-signer.key\\\\nI0130 10:11:39.322438 1 observer_polling.go:159] Starting file observer\\\\nW0130 10:11:39.325527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 10:11:39.325676 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:39.326402 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-457381567/tls.crt::/tmp/serving-cert-457381567/tls.key\\\\\\\"\\\\nF0130 10:11:39.604385 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.446618 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.466217 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: W0130 10:11:57.475692 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3 WatchSource:0}: Error finding container acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3: Status 404 
returned error can't find the container with id acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3 Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.476139 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.503205 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692377 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692520 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.692499174 +0000 UTC m=+23.258802998 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692566 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692605 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.692712 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692835 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692849 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692859 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.692888 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.692880812 +0000 UTC m=+23.259184636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693155 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693188 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.693181599 +0000 UTC m=+23.259485423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693230 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693268 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.693244591 +0000 UTC m=+23.259548415 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693309 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693319 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693328 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.693348 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.693342493 +0000 UTC m=+23.259646317 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:57 crc kubenswrapper[4984]: I0130 10:11:57.793105 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.793276 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:57 crc kubenswrapper[4984]: E0130 10:11:57.793315 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:11:58.793301649 +0000 UTC m=+23.359605473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.091845 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:55:14.039278981 +0000 UTC Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.093695 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.094698 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.095927 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.097028 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.098073 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.098622 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.099219 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.100204 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.100864 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.101860 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.104218 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.105629 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.106227 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.106887 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.107980 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.108585 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.110409 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.110919 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.111545 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.112709 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.113272 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.114501 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.115102 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.116594 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.117418 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.118394 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.119610 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.120328 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.121197 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.121904 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.124767 4984 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.124898 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.126761 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.127929 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.128482 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.130288 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.131186 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.132284 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.132985 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.134385 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.134939 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.135968 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.136993 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.137657 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.138150 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.139126 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.140043 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.140780 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.141249 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.142130 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.142759 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.143674 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.144216 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.144687 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.215746 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.216170 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b8af47c59e6471c72a764fc1bb679b40a49a6f1be1b7952c88cac70cfff472b8"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.218369 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.218399 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.218411 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"35176f1db40bff30d0a1bdf1cd2412184637b463c4c1804bec5920259c1dd128"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.226228 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" 
event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.226313 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"591abe0e3314df0de6d6cfd6cbf735143e6b6adce321fcc920f5a2e00918538a"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.228688 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"acf4743f834d35a85f409c817390187f0e64fdcdc199929f0fafa5eaca58dad3"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.234282 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.234365 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"36892c45faeadfd18f700d0f05aa81c0a865b412a10a9b7263b4cedcfd049de7"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.234852 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee2759a51b643b4c7daf9a95009f2a04660bba3e741517cb38b5eff164d77078\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"message\\\":\\\"W0130 10:11:39.159607 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 10:11:39.159917 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769767899 cert, and key in /tmp/serving-cert-457381567/serving-signer.crt, /tmp/serving-cert-457381567/serving-signer.key\\\\nI0130 10:11:39.322438 1 observer_polling.go:159] Starting file observer\\\\nW0130 10:11:39.325527 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 10:11:39.325676 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:39.326402 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-457381567/tls.crt::/tmp/serving-cert-457381567/tls.key\\\\\\\"\\\\nF0130 10:11:39.604385 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 
10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.236451 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6tdgl" event={"ID":"5a9a337d-bc6b-4a98-8abc-7569fa4fa312","Type":"ContainerStarted","Data":"d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.236488 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6tdgl" event={"ID":"5a9a337d-bc6b-4a98-8abc-7569fa4fa312","Type":"ContainerStarted","Data":"4603b4dde61408a7dfeb79ea5137d1d31be5f5f0d3e704bb3e4679acf2a7cf15"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.238490 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" exitCode=0 Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.238627 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.238713 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"9a5c5f0c87eb230fd06c2a946e269e2d2a3860384327e26e9cd419f72e754050"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.242043 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.244424 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.244708 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.245864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.245918 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.245932 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"931619a47687c757bb2f44f8c147193f2613873a6f45d51204f84236421e0391"} Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.249295 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.255764 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.263692 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.273316 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.282561 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.292828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.309946 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.333218 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.377663 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.400504 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.413044 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.423427 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.432465 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.443880 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.455443 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.471612 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.481498 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.489530 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.497883 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.507699 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.523037 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.541608 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.553405 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.561122 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.568335 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.579551 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.591506 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702443 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702561 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702607 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.702581205 +0000 UTC m=+25.268885099 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702641 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702654 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702695 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.702678878 +0000 UTC m=+25.268982802 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702733 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.702797 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702837 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702921 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.702901583 +0000 UTC m=+25.269205467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702921 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702940 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702964 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702968 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702979 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.702988 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.703024 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.703013655 +0000 UTC m=+25.269317569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.703052 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.703032615 +0000 UTC m=+25.269336499 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:11:58 crc kubenswrapper[4984]: I0130 10:11:58.804114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.804341 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:58 crc kubenswrapper[4984]: E0130 10:11:58.804459 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:00.804432333 +0000 UTC m=+25.370736158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.022511 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091657 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091873 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.091968 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091656 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091977 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:32:44.56824892 +0000 UTC Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.091689 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.092126 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.092181 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.092337 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.251412 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c" exitCode=0 Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.251509 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255385 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 
10:11:59.255449 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255465 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255496 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.255864 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:11:59 crc kubenswrapper[4984]: E0130 10:11:59.256023 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.277106 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.302086 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.340870 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.358043 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.393843 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.411846 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.435206 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.454454 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.464180 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.476867 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc 
kubenswrapper[4984]: I0130 10:11:59.494298 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.505950 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.523355 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.544287 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.790899 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l5dvh"] Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.791659 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.795134 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.795414 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.796158 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.796613 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.808917 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.825145 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.840325 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.859918 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.885365 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.909154 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.914660 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7x4j\" (UniqueName: \"kubernetes.io/projected/a73d7427-d84d-469a-8a34-e32bcd26e1e7-kube-api-access-g7x4j\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.914735 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a73d7427-d84d-469a-8a34-e32bcd26e1e7-host\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.914778 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a73d7427-d84d-469a-8a34-e32bcd26e1e7-serviceca\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.927182 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.936605 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.946509 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.955041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.972708 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.985137 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:11:59 crc kubenswrapper[4984]: I0130 10:11:59.996037 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:11:59Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.011239 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc 
kubenswrapper[4984]: I0130 10:12:00.015962 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a73d7427-d84d-469a-8a34-e32bcd26e1e7-host\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.016033 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a73d7427-d84d-469a-8a34-e32bcd26e1e7-serviceca\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.016068 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7x4j\" (UniqueName: \"kubernetes.io/projected/a73d7427-d84d-469a-8a34-e32bcd26e1e7-kube-api-access-g7x4j\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.016064 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a73d7427-d84d-469a-8a34-e32bcd26e1e7-host\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.017211 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a73d7427-d84d-469a-8a34-e32bcd26e1e7-serviceca\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.028035 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.035952 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7x4j\" (UniqueName: \"kubernetes.io/projected/a73d7427-d84d-469a-8a34-e32bcd26e1e7-kube-api-access-g7x4j\") pod \"node-ca-l5dvh\" (UID: \"a73d7427-d84d-469a-8a34-e32bcd26e1e7\") " pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.092477 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:31:54.589871159 +0000 UTC Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.106465 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l5dvh" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.259991 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980"} Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.262798 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l5dvh" event={"ID":"a73d7427-d84d-469a-8a34-e32bcd26e1e7","Type":"ContainerStarted","Data":"0451172309e5191c483641fb9d175a3fcfa50a22b1c3991d762503b2c3e42d0a"} Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.278175 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.294747 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.305845 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.315888 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.329108 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.341289 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.355156 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.368500 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.378990 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.390424 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc 
kubenswrapper[4984]: I0130 10:12:00.404335 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.415582 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.427151 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.440583 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.451014 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:00Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.721949 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722060 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722098 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722121 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.722153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722318 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722366 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722380 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722320 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722280353 +0000 UTC m=+29.288584207 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722463 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722515 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722341 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722536 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722475 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722454007 +0000 UTC m=+29.288757841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722408 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722643 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.72261634 +0000 UTC m=+29.288920204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722673 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722658351 +0000 UTC m=+29.288962215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.722704 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.722692462 +0000 UTC m=+29.288996316 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: I0130 10:12:00.823439 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.823549 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:00 crc kubenswrapper[4984]: E0130 10:12:00.823616 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:04.823599129 +0000 UTC m=+29.389902953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089561 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089569 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089687 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089584 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.089565 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089750 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089832 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:01 crc kubenswrapper[4984]: E0130 10:12:01.089926 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.092849 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:15:23.754085304 +0000 UTC Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.274509 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc"} Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.279407 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980" exitCode=0 Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.280222 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980"} Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.282169 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l5dvh" 
event={"ID":"a73d7427-d84d-469a-8a34-e32bcd26e1e7","Type":"ContainerStarted","Data":"1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad"} Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.296194 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.323641 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.344757 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.362198 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.380092 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.396480 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc 
kubenswrapper[4984]: I0130 10:12:01.415503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.433025 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.446133 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.459846 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.479679 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.506380 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.529614 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.543431 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.556108 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.567873 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.581162 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.593084 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.603806 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.624529 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.643943 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.657062 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.668810 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.678023 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc 
kubenswrapper[4984]: I0130 10:12:01.692839 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.702628 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.715704 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.729053 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.740432 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:01 crc kubenswrapper[4984]: I0130 10:12:01.754220 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:01Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.092917 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:18:11.003142963 +0000 UTC Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.192675 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.204562 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.216613 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.219584 4984 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.247073 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.264881 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.277077 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.287820 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68" exitCode=0 Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.287910 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.295008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.297404 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc 
kubenswrapper[4984]: E0130 10:12:02.302521 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.321680 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.343634 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.367318 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.384019 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.398706 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.425044 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.451831 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.472902 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.489551 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.507495 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.521936 4984 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.527922 4984 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528485 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.528626 4984 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.537324 4984 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.537550 4984 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 
30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538731 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.538767 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.547837 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.557905 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561388 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561405 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.561447 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.563169 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.573059 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577288 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577332 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.577351 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.581877 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.596388 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.597078 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600684 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.600776 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.613659 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.617340 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625627 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.625682 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.633436 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z 
is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.643234 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: E0130 10:12:02.643400 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.647290 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.654302 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.679807 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.696815 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.708400 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.734150 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 
10:12:02.746054 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.749732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.749908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.750059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.750183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.750340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.762115 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.782344 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.797505 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:02Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.853760 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.957081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.957622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.957873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.958060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:02 crc kubenswrapper[4984]: I0130 10:12:02.958187 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:02Z","lastTransitionTime":"2026-01-30T10:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061692 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.061772 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089713 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089806 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089727 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.089944 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.089729 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.090123 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.090317 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:03 crc kubenswrapper[4984]: E0130 10:12:03.090466 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.093106 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:32:09.272993888 +0000 UTC Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164754 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.164892 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268718 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268754 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.268786 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.301751 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d" exitCode=0 Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.301838 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.335576 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.356557 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370879 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.370914 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.374169 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.392034 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.411283 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.457186 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473285 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473301 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.473310 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.508859 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.523525 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.535710 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.546567 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.556168 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.571178 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.574986 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.582846 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.591944 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.602455 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc 
kubenswrapper[4984]: I0130 10:12:03.617441 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:03Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.677513 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780315 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780387 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780411 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.780429 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.884662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.885149 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:03 crc kubenswrapper[4984]: I0130 10:12:03.987698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:03Z","lastTransitionTime":"2026-01-30T10:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.089921 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.093366 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:39:45.948343904 +0000 UTC Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192658 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.192686 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294764 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.294798 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.307699 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.312056 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.322674 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc 
kubenswrapper[4984]: I0130 10:12:04.338808 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.355887 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.380574 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z"
Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.398951 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.414958 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.435962 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.454896 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.470855 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.484374 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501357 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501715 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.501812 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.514657 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.525144 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.541312 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc 
kubenswrapper[4984]: I0130 10:12:04.558363 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8
r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.574891 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.595539 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604729 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.604772 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.620276 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z 
is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.647074 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.685422 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.701093 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706863 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706931 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.706942 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.722908 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.740458 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.755133 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.767848 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.767997 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768034 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 10:12:12.76801428 +0000 UTC m=+37.334318104 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.768063 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768096 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.768107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.768151 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" 
(UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768171 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.768149323 +0000 UTC m=+37.334453187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768278 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768296 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768311 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768341 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-30 10:12:12.768331917 +0000 UTC m=+37.334635741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768393 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768421 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.768412749 +0000 UTC m=+37.334716573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768471 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768484 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768495 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.768521 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.768512861 +0000 UTC m=+37.334816685 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.776601 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.794932 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809233 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809262 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809282 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.809295 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.810645 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.824142 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.841704 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8
r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.854719 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.865915 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.868572 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.868803 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: E0130 10:12:04.868968 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:12.868937597 +0000 UTC m=+37.435241441 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.878165 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.888934 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911728 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:04 crc kubenswrapper[4984]: I0130 10:12:04.911753 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:04Z","lastTransitionTime":"2026-01-30T10:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.014932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.014988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.015010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.015033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.015048 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089852 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089974 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090041 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089859 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090097 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090177 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.089863 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:05 crc kubenswrapper[4984]: E0130 10:12:05.090259 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.094099 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:08:03.374743802 +0000 UTC Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.117178 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.220415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.318418 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167" exitCode=0 Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.318487 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.319471 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.319508 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.319530 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324460 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324528 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.324541 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.345732 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.350678 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.353730 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.360987 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.374916 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc 
kubenswrapper[4984]: I0130 10:12:05.391928 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.403121 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.417475 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427036 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427092 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427102 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.427750 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.440588 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.450630 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.460551 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.472547 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.484229 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.493716 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.504028 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc 
kubenswrapper[4984]: I0130 10:12:05.529817 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529835 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529859 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.529878 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.530362 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b
3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.551465 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"conta
inerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.561272 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.573776 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.588296 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.600476 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.610018 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc 
kubenswrapper[4984]: I0130 10:12:05.630129 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632403 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.632429 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.644675 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.660336 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.672174 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.681696 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.693473 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.709589 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.725747 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.734971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.735026 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.736769 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.748577 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.789438 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:05Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.836474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 
10:12:05.836696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.836922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.837034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.837122 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.877899 4984 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939330 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939348 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:05 crc kubenswrapper[4984]: I0130 10:12:05.939394 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:05Z","lastTransitionTime":"2026-01-30T10:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042779 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.042842 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.096195 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 20:00:34.730837783 +0000 UTC Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.116609 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.142936 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.145727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.145795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.145814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.146353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.146415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.158129 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.177304 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.237134 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.248894 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 
10:12:06.249540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.249562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.250138 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.250415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.251699 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.265666 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.279026 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.290835 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.301766 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc 
kubenswrapper[4984]: I0130 10:12:06.317983 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.327030 4984 generic.go:334] "Generic (PLEG): container finished" podID="007eb083-e87a-44f4-ab1b-7ad0ef8c8c19" containerID="9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1" exitCode=0 Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.327109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerDied","Data":"9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.330742 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.345653 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.353707 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.365994 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.394665 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.435866 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456141 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.456150 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.471589 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.511046 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558186 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558194 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.558217 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.561107 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.589984 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.630430 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660157 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 
10:12:06.660206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.660238 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.671371 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.727364 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.761706 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763181 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.763337 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.792085 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.831278 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc 
kubenswrapper[4984]: I0130 10:12:06.866210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866268 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866280 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.866307 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.876461 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.910573 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.953380 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968414 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.968442 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:06Z","lastTransitionTime":"2026-01-30T10:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:06 crc kubenswrapper[4984]: I0130 10:12:06.995505 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.035828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071018 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071232 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071333 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.071475 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.075448 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.089997 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090131 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.090556 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090632 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.090700 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090774 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.090828 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:07 crc kubenswrapper[4984]: E0130 10:12:07.090896 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.096872 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:41:15.827175223 +0000 UTC Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173660 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.173682 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276345 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.276355 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.331948 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/0.log" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.336476 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f" exitCode=1 Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.336613 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.337223 4984 scope.go:117] "RemoveContainer" containerID="c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.341567 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" event={"ID":"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19","Type":"ContainerStarted","Data":"44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.355101 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.370883 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.379933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.379981 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.379994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.380012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.380025 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.387389 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.402471 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.415622 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.465201 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc 
kubenswrapper[4984]: I0130 10:12:07.481627 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482806 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.482915 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.499309 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.517394 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 
2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.529958 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.543561 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.560356 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc 
kubenswrapper[4984]: I0130 10:12:07.586332 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586363 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.586376 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.601368 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10
:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.649330 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.673398 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.689868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.689941 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.689963 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 
10:12:07.689988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.690007 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.712537 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.753176 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.793511 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.796692 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.832173 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.872974 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b
b47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896747 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.896899 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.916006 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.964039 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999047 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:07Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999816 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 
10:12:07.999921 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:07 crc kubenswrapper[4984]: I0130 10:12:07.999948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:07.999959 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:07Z","lastTransitionTime":"2026-01-30T10:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.036060 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.075157 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.097765 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:32:46.966474103 +0000 UTC Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.102787 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.127887 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.175239 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.192086 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 
10:12:08.204919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204945 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.204956 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.233868 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.279271 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.307826 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.322818 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.345657 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/0.log" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.348019 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.348389 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.352475 4984 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.395457 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.409980 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.433819 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.481647 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc 
kubenswrapper[4984]: I0130 10:12:08.512367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.512412 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.518101 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch
\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.561162 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.591689 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615066 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 
10:12:08.615140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.615152 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.635680 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.672139 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.710415 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717792 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.717833 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.757730 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.792416 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820593 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820602 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.820626 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.830438 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.872309 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.914845 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.923156 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:08Z","lastTransitionTime":"2026-01-30T10:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.956753 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:08 crc kubenswrapper[4984]: I0130 10:12:08.998014 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:08Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029561 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029707 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.029731 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090083 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090119 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090160 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090268 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.090287 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090412 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090504 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.090629 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.097892 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:40:08.857782256 +0000 UTC Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.132869 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.235965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.236041 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339958 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339970 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.339978 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.351666 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.352101 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/0.log" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.355400 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" exitCode=1 Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.355432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.355460 4984 scope.go:117] "RemoveContainer" containerID="c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.356457 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:09 crc kubenswrapper[4984]: E0130 10:12:09.356630 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.376902 4984 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed
7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.388793 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.406505 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b
b47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.420339 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.432149 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.447917 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.448002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.448331 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.468738 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.480124 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.491686 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.504384 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.513511 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.524119 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.539286 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] 
Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port 
openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551156 4984 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551192 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.551388 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.562423 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.593927 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.635784 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.653376 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.755973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756129 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.756152 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.798369 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72"] Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.798988 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.802101 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.802675 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.817040 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.830345 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.845828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858384 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc 
kubenswrapper[4984]: I0130 10:12:09.858421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858430 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.858454 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.867876 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1401539b58f08ac4058df35278e82d5ecea384c8296b7aaa9a603e509e0975f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"message\\\":\\\"208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 10:12:07.200447 6248 
handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 10:12:07.200457 6248 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 10:12:07.200468 6248 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 10:12:07.200485 6248 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 10:12:07.200510 6248 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 10:12:07.200531 6248 factory.go:656] Stopping watch factory\\\\nI0130 10:12:07.200541 6248 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 10:12:07.200548 6248 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 10:12:07.200561 6248 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 10:12:07.200576 6248 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 10:12:07.200696 6248 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200810 6248 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 10:12:07.200841 6248 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.888678 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.913529 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928173 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928221 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6bcf749c-5a91-4939-9805-775678104b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928333 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.928374 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnsd\" (UniqueName: \"kubernetes.io/projected/6bcf749c-5a91-4939-9805-775678104b43-kube-api-access-qrnsd\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.951110 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:09 crc 
kubenswrapper[4984]: I0130 10:12:09.963648 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963753 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.963802 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:09Z","lastTransitionTime":"2026-01-30T10:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:09 crc kubenswrapper[4984]: I0130 10:12:09.995225 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:09Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029062 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029135 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6bcf749c-5a91-4939-9805-775678104b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029197 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.029299 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnsd\" 
(UniqueName: \"kubernetes.io/projected/6bcf749c-5a91-4939-9805-775678104b43-kube-api-access-qrnsd\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.030747 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.031074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6bcf749c-5a91-4939-9805-775678104b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.031710 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.036913 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6bcf749c-5a91-4939-9805-775678104b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.059536 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrnsd\" (UniqueName: \"kubernetes.io/projected/6bcf749c-5a91-4939-9805-775678104b43-kube-api-access-qrnsd\") pod \"ovnkube-control-plane-749d76644c-9ml72\" (UID: \"6bcf749c-5a91-4939-9805-775678104b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.066708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.066955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.066978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.067005 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.067026 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.092576 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.098456 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:24:04.402595199 +0000 UTC Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.122291 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.140485 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169051 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.169091 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.172595 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.210303 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.253326 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 
10:12:10.275142 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275158 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.275170 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.293120 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.335013 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.363206 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" event={"ID":"6bcf749c-5a91-4939-9805-775678104b43","Type":"ContainerStarted","Data":"8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.363275 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" 
event={"ID":"6bcf749c-5a91-4939-9805-775678104b43","Type":"ContainerStarted","Data":"8b738624a6a0db2711461f6dc1648c59fe76241c9b4e04fabb15e662c690eceb"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.366807 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.371887 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.373567 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:10 crc kubenswrapper[4984]: E0130 10:12:10.373820 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377195 4984 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.377237 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.413028 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.450240 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479508 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479747 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.479935 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.480031 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.490231 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.532429 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.576041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583621 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.583677 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.614014 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.650781 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685707 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685771 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.685818 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.691503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.735900 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.772626 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.788669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789465 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.789659 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.815305 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.869298 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.892453 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.899883 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.934103 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.971825 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:10Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.994720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 
10:12:10.994989 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.995088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.995174 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:10 crc kubenswrapper[4984]: I0130 10:12:10.995289 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:10Z","lastTransitionTime":"2026-01-30T10:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.012549 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.064628 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.089852 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090056 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.090135 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.090194 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090321 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.090462 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090717 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:11 crc kubenswrapper[4984]: E0130 10:12:11.090829 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098073 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098106 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.098119 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.099326 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 21:45:50.87602382 +0000 UTC Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200400 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.200541 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303218 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.303319 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.376656 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" event={"ID":"6bcf749c-5a91-4939-9805-775678104b43","Type":"ContainerStarted","Data":"556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.391000 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.402817 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409733 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.409760 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.418936 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.428789 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.441566 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.450486 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b56
21187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.459532 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.470718 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.483776 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.494173 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.506774 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511365 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511415 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.511443 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.535828 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.572016 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.609823 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613366 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613379 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.613413 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.653123 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.698420 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc 
kubenswrapper[4984]: I0130 10:12:11.715790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715802 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715818 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.715830 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.735636 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry 
successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:11Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.817991 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.818072 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:11 crc kubenswrapper[4984]: I0130 10:12:11.921283 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:11Z","lastTransitionTime":"2026-01-30T10:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.024856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.024930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.024954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.025175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.025198 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.102214 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:16:16.808139288 +0000 UTC Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.127334 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229815 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.229896 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333528 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.333990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.436902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.436965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.436983 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.437061 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.437084 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.539701 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.539982 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.539995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.540011 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.540023 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.644698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.648141 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.670099 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.674529 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.695016 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.702422 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.721334 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725537 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.725669 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.745184 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749603 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.749803 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.771046 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:12Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.771185 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.772798 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.857966 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858124 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858203 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.858157411 +0000 UTC m=+53.424461275 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858316 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858348 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858365 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858382 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858425 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.858404927 +0000 UTC m=+53.424708821 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858494 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.858549 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858647 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858706 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858763 4984 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858781 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.858787 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.859384 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.859346288 +0000 UTC m=+53.425650172 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.859440 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.85941314 +0000 UTC m=+53.425717004 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.859476 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.859459951 +0000 UTC m=+53.425763815 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876182 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876223 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.876240 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.959205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.959451 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: E0130 10:12:12.959564 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:12:28.959539009 +0000 UTC m=+53.525842863 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978871 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:12 crc kubenswrapper[4984]: I0130 10:12:12.978893 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:12Z","lastTransitionTime":"2026-01-30T10:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.082860 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090077 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090132 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090161 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.090077 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090279 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090430 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090595 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:13 crc kubenswrapper[4984]: E0130 10:12:13.090765 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.103020 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:19:14.920434323 +0000 UTC Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186428 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186482 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.186502 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.289956 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.392799 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495285 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.495326 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598356 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.598399 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.701947 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702244 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.702332 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.805927 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.806071 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908849 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908940 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908959 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:13 crc kubenswrapper[4984]: I0130 10:12:13.908972 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:13Z","lastTransitionTime":"2026-01-30T10:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.011954 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.090476 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.104051 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:53:33.530299708 +0000 UTC
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116584 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.116626 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218914 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.218949 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321066 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321083 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.321117 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.392443 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.395462 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.396523 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.420618 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.424979 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.436319 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.448528 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.461121 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc 
kubenswrapper[4984]: I0130 10:12:14.476400 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.489370 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.506952 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.519799 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527304 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.527356 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.535155 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.548608 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.562466 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.584847 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.599225 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.611947 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.623415 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 
10:12:14.635043 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.635083 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.637679 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.672979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738180 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.738216 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841116 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.841145 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:14 crc kubenswrapper[4984]: I0130 10:12:14.943910 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:14Z","lastTransitionTime":"2026-01-30T10:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046692 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.046756 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089272 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089320 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089325 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.089281 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089444 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089570 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089606 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:15 crc kubenswrapper[4984]: E0130 10:12:15.089689 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.104158 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:26:00.481509751 +0000 UTC Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149136 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.149245 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.252864 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355598 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.355627 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457585 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.457670 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560314 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560345 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.560357 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662639 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.662657 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765415 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.765443 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867771 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.867799 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970786 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970880 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:15 crc kubenswrapper[4984]: I0130 10:12:15.970897 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:15Z","lastTransitionTime":"2026-01-30T10:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076496 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076571 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.076728 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.077485 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.102598 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.104722 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:02:40.714001082 +0000 UTC Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.116850 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.128837 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.140406 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.155116 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.173810 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.179508 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.195385 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.211689 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.222283 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.237065 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.251069 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.265692 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281566 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281504 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281579 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.281718 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.295016 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.305075 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.317019 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc 
kubenswrapper[4984]: I0130 10:12:16.331822 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384329 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.384354 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.486911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487288 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.487458 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590508 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590590 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.590633 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.693920 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.693976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.693992 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.694015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.694029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.797242 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900728 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:16 crc kubenswrapper[4984]: I0130 10:12:16.900743 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:16Z","lastTransitionTime":"2026-01-30T10:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.003528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090142 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090239 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090143 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.090372 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.090458 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.090568 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.090948 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:17 crc kubenswrapper[4984]: E0130 10:12:17.091228 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.105010 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:03:31.253546697 +0000 UTC Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108270 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.108394 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211254 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211304 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.211361 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.314865 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.417825 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.418320 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521756 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.521835 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624366 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624392 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.624413 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726737 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726751 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.726762 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829136 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.829165 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932363 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932388 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:17 crc kubenswrapper[4984]: I0130 10:12:17.932405 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:17Z","lastTransitionTime":"2026-01-30T10:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.034900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.034970 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.034999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.035028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.035053 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.105227 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:33:59.812487889 +0000 UTC Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138318 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.138379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241527 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.241571 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344641 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.344697 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446927 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.446941 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550401 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.550554 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653770 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653881 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.653901 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757242 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.757318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861316 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.861379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964549 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:18 crc kubenswrapper[4984]: I0130 10:12:18.964774 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:18Z","lastTransitionTime":"2026-01-30T10:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067524 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067566 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.067583 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089263 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089352 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089357 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.089378 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.089558 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.089716 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.090028 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:19 crc kubenswrapper[4984]: E0130 10:12:19.090218 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.106338 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:59:13.555754479 +0000 UTC Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171058 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.171661 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.275533 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377887 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.377901 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.480881 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.481386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.481665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.481997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.482356 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.586263 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.586649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.586866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.587070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.587318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.691178 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794772 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794832 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.794885 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897956 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:19 crc kubenswrapper[4984]: I0130 10:12:19.897990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:19Z","lastTransitionTime":"2026-01-30T10:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.001507 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.104512 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.107301 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:14:31.932290508 +0000 UTC Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.207911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208147 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.208537 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311747 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311764 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.311808 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415120 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.415231 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.517599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518011 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518254 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.518727 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623396 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623459 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.623503 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.726515 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.726951 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.727031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.727117 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.727177 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830115 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830164 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.830184 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.933803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934169 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:20 crc kubenswrapper[4984]: I0130 10:12:20.934710 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:20Z","lastTransitionTime":"2026-01-30T10:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038169 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038241 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038262 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038318 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.038334 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089823 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089918 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089852 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.089822 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090041 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090120 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090283 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:21 crc kubenswrapper[4984]: E0130 10:12:21.090401 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.107804 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:18:39.301138004 +0000 UTC
Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.141391 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245352 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.245367 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348722 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.348734 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451512 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451527 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451548 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.451561 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555041 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555128 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.555142 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.657873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658011 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.658066 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.761452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.761739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.761923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.762149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.762366 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.867469 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970624 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:21 crc kubenswrapper[4984]: I0130 10:12:21.970661 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:21Z","lastTransitionTime":"2026-01-30T10:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073141 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073167 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.073178 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.108330 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:35:15.758652209 +0000 UTC Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175569 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175610 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.175628 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278282 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278351 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.278382 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.381340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484000 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.484076 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.586932 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689751 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.689761 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.792585 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.896555 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917520 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.917559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: E0130 10:12:22.932093 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:22Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937253 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.937329 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: E0130 10:12:22.957725 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:22Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962213 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.962324 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:22 crc kubenswrapper[4984]: E0130 10:12:22.979963 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:22Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:22 crc kubenswrapper[4984]: I0130 10:12:22.985312 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:22Z","lastTransitionTime":"2026-01-30T10:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.005096 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:23Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.009359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.023956 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:23Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.024175 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.026559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089135 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089163 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089305 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.089379 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.089494 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.089761 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.089924 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:23 crc kubenswrapper[4984]: E0130 10:12:23.090029 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.108608 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:08:02.189983964 +0000 UTC Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.129924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.129976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.129996 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.130020 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.130037 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.233945 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234372 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234808 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.234937 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.338888 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.443396 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546545 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.546563 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649083 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649144 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.649175 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.752667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.752954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.753075 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.753172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.753235 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.856721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.857790 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960580 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:23 crc kubenswrapper[4984]: I0130 10:12:23.960601 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:23Z","lastTransitionTime":"2026-01-30T10:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063907 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063931 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.063940 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.108903 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:18:40.946832977 +0000 UTC
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.165990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.269283 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.371417 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473482 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.473510 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575481 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.575593 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678556 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678602 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678642 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.678660 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781641 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.781685 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884897 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.884974 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987423 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987446 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:24 crc kubenswrapper[4984]: I0130 10:12:24.987499 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:24Z","lastTransitionTime":"2026-01-30T10:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089564 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089618 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089566 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.089700 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.089871 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.090058 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.090227 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:12:25 crc kubenswrapper[4984]: E0130 10:12:25.090464 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.090987 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091114 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.091489 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.109896 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:01:03.670637822 +0000 UTC
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.193509 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296269 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.296331 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.398946 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399037 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.399075 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.441335 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.445320 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57"}
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.445913 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v"
Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.468083 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.500349 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501352 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.501476 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.517653 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.542834 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.557792 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.587053 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604078 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc 
kubenswrapper[4984]: I0130 10:12:25.604118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604142 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.604153 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.605943 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.624940 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.639843 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.648722 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.657180 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc 
kubenswrapper[4984]: I0130 10:12:25.668490 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.677466 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.690937 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.707936 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.708965 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.721120 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.733761 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:25Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.809936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.809995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.810007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.810034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.810046 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:25 crc kubenswrapper[4984]: I0130 10:12:25.911958 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:25Z","lastTransitionTime":"2026-01-30T10:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.014176 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.108916 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.109995 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:21:08.564387242 +0000 UTC Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117106 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.117123 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.129915 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.147249 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.161021 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.181300 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.195960 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.215233 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc 
kubenswrapper[4984]: I0130 10:12:26.218864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218872 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218883 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.218893 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.234983 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry 
successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.259291 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.275616 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.289817 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc 
kubenswrapper[4984]: I0130 10:12:26.309098 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.319990 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.320750 4984 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.332516 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.346341 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.359407 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.368826 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.422528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.450349 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.451225 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/1.log" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454200 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" exitCode=1 Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454296 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454350 4984 scope.go:117] "RemoveContainer" containerID="68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.454868 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:26 crc kubenswrapper[4984]: E0130 10:12:26.455014 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.472157 4984 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.491913 4984 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c6d5f9b3be32a532165a6faedd3d255c494a72b4dc4700a9ce49de1622447c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:08Z\\\",\\\"message\\\":\\\"-alerter-4ln5h in node crc\\\\nI0130 10:12:08.449072 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0130 10:12:08.449078 6435 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0130 
10:12:08.449079 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0130 10:12:08.449085 6435 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nF0130 10:12:08.449088 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\
\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.509169 4984 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a
5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.521184 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526878 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 
10:12:26.526919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.526936 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.532392 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.542888 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.553890 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.564061 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.576029 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.589379 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.601008 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.612984 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc 
kubenswrapper[4984]: I0130 10:12:26.626626 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.629379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.639337 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.651841 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.664336 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.676621 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.732507 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836361 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.836518 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939209 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:26 crc kubenswrapper[4984]: I0130 10:12:26.939348 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:26Z","lastTransitionTime":"2026-01-30T10:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042605 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.042621 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.089775 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.089871 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.089888 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.090007 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090002 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090164 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090216 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.090307 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.110857 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:49:57.567798732 +0000 UTC Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144890 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144907 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.144949 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248508 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.248539 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.351924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.351990 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.352010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.352034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.352050 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.454628 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.459089 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.462797 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:27 crc kubenswrapper[4984]: E0130 10:12:27.462936 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.479304 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.492227 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc 
kubenswrapper[4984]: I0130 10:12:27.514639 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.524860 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.539228 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557967 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557980 4984 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.557998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.558372 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.558009 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.571665 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.590362 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.601487 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.615886 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.628997 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.640644 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.653178 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661787 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661806 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.661819 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.665207 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.678959 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.696579 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.720010 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:27Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764816 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764831 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.764860 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868270 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.868283 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971238 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:27 crc kubenswrapper[4984]: I0130 10:12:27.971359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:27Z","lastTransitionTime":"2026-01-30T10:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.074284 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.111789 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:32:07.613550752 +0000 UTC Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.176838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177579 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.177781 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280527 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.280653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384384 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384465 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.384507 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487725 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.487768 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590364 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590374 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.590398 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700569 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.700581 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.803705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.803988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.804149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.804353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.804522 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908217 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.908317 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:28Z","lastTransitionTime":"2026-01-30T10:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939450 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939584 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 10:13:00.939555025 +0000 UTC m=+85.505858879 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939714 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939767 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:28 crc kubenswrapper[4984]: I0130 10:12:28.939799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939842 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939870 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939889 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939923 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939948 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.939929393 +0000 UTC m=+85.506233247 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.939973 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.939959544 +0000 UTC m=+85.506263398 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940055 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940077 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940097 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940152 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.940136468 +0000 UTC m=+85.506440322 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940208 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:28 crc kubenswrapper[4984]: E0130 10:12:28.940285 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:00.9402348 +0000 UTC m=+85.506538654 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.011353 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.040734 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.041043 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.041391 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:01.041317451 +0000 UTC m=+85.607621315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.090135 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.090335 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.090824 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.090947 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.091064 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.091190 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.091307 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:29 crc kubenswrapper[4984]: E0130 10:12:29.091392 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.111969 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:53:56.134007885 +0000 UTC Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114480 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.114523 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217629 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.217680 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.300978 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.315402 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.320937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321348 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.321614 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.324395 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.339819 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.355717 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.375364 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.399909 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424711 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424761 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424792 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.424806 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.431093 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.443020 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.455891 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc 
kubenswrapper[4984]: I0130 10:12:29.475583 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.492641 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.505472 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.519674 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.527408 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.538503 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.552825 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.566336 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.577919 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.589407 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 
10:12:29.629423 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629465 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.629480 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.684839 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.698320 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.713889 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.726325 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc 
kubenswrapper[4984]: I0130 10:12:29.732212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.732227 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.737994 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.749885 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc 
kubenswrapper[4984]: I0130 10:12:29.770699 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.783378 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.800763 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.813371 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.826190 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834448 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.834460 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.840386 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.860382 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.885524 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.904974 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.926954 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936828 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.936888 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:29Z","lastTransitionTime":"2026-01-30T10:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.946514 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.957698 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:29 crc kubenswrapper[4984]: I0130 10:12:29.971400 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"n
ame\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-30T10:12:29Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.039979 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040079 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.040131 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.113035 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:58:32.0404794 +0000 UTC Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.143244 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246401 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.246466 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349117 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.349179 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452585 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.452622 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555351 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.555365 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658131 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658202 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.658215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760383 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.760418 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863675 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.863716 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966195 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:30 crc kubenswrapper[4984]: I0130 10:12:30.966238 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:30Z","lastTransitionTime":"2026-01-30T10:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.068873 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089620 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089664 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089638 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.089769 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.089638 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.089924 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.089969 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:31 crc kubenswrapper[4984]: E0130 10:12:31.090073 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.113747 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:59:19.253549354 +0000 UTC Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.170987 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171027 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.171065 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272783 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272881 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.272899 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.374955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.375064 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.477876 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580605 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.580619 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683630 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.683688 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787100 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.787174 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890792 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.890874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993631 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:31 crc kubenswrapper[4984]: I0130 10:12:31.993653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:31Z","lastTransitionTime":"2026-01-30T10:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096307 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096357 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.096374 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.113914 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:35:06.007526705 +0000 UTC Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.198662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.198937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.199039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.199167 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.199310 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302523 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.302567 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.405220 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507380 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.507415 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.611962 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.612066 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714671 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714710 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.714750 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816836 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816850 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.816859 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919197 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:32 crc kubenswrapper[4984]: I0130 10:12:32.919216 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:32Z","lastTransitionTime":"2026-01-30T10:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021610 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.021653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.089325 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.089402 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.089456 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.089418 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.089586 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.089730 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.090060 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.090371 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.114933 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:23:01.142018713 +0000 UTC Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124337 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124372 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.124409 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.227610 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.288930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.288973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.288984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.289002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.289015 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.311316 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.316490 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.334088 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340117 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.340193 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.355401 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359619 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.359676 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.372060 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376083 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.376092 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.389354 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:33Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:33 crc kubenswrapper[4984]: E0130 10:12:33.389518 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390757 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.390786 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493654 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493735 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.493777 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596811 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.596882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.700147 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.803919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.803977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.804000 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.804028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.804049 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907026 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907067 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:33 crc kubenswrapper[4984]: I0130 10:12:33.907099 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:33Z","lastTransitionTime":"2026-01-30T10:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009940 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.009961 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112629 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.112699 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.116028 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:06:12.109870917 +0000 UTC Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215078 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215112 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.215149 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317757 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317772 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.317782 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419735 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419809 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.419850 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.522144 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624471 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.624513 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726437 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.726467 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829147 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829166 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.829179 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:34 crc kubenswrapper[4984]: I0130 10:12:34.932961 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:34Z","lastTransitionTime":"2026-01-30T10:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036202 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036300 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036333 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.036345 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.089633 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.089676 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.089887 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.089881 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.090002 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.090068 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.090132 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:35 crc kubenswrapper[4984]: E0130 10:12:35.090193 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.116773 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:22:00.23134162 +0000 UTC Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139213 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139236 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.139287 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241817 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.241921 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344485 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.344614 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.446911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447014 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.447128 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.550284 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652742 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.652932 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755220 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755229 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755269 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.755281 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.858715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962593 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962613 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:35 crc kubenswrapper[4984]: I0130 10:12:35.962667 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:35Z","lastTransitionTime":"2026-01-30T10:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.065402 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.108708 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.117107 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:54:35.880128921 +0000 UTC Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.121389 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.139838 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.157425 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.171563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.171644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.171712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.173121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.173065 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.173491 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.193024 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.209579 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.226395 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.246519 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.275015 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.276535 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.308554 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.329406 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.347196 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.362123 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc 
kubenswrapper[4984]: I0130 10:12:36.378746 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.378802 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.386073 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.399357 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.408818 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.422536 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:36Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480783 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480807 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.480824 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.583998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.584007 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685963 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.685994 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.788590 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.890942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.890997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.891017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.891040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.891057 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993409 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:36 crc kubenswrapper[4984]: I0130 10:12:36.993521 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:36Z","lastTransitionTime":"2026-01-30T10:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089472 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089473 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089475 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089749 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089590 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.089490 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:12:37 crc kubenswrapper[4984]: E0130 10:12:37.089776 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095946 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095962 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.095973 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.118077 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:05:13.186726731 +0000 UTC
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.198349 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301380 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301480 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.301515 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405579 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.405694 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507093 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507137 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507151 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507164 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.507173 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.609980 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712725 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.712773 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.817918 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818400 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.818479 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:37 crc kubenswrapper[4984]: I0130 10:12:37.920684 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:37Z","lastTransitionTime":"2026-01-30T10:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.023895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.023977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.024010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.024039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.024060 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.118333 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:25:43.604111783 +0000 UTC
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126138 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.126161 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229213 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229240 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.229274 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332211 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332283 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332298 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.332308 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.435840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539194 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.539339 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642447 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642525 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.642619 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745969 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.745994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.746016 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848561 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.848674 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951621 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:38 crc kubenswrapper[4984]: I0130 10:12:38.951774 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:38Z","lastTransitionTime":"2026-01-30T10:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.053797 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089223 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089240 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089285 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.089236 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089370 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089508 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089659 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:12:39 crc kubenswrapper[4984]: E0130 10:12:39.089706 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.119329 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:45:47.713691812 +0000 UTC
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.155985 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.259468 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362808 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362830 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.362848 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464765 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464852 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.464867 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567518 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.567530 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.670955 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773186 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773202 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.773214 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.875523 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977598 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977631 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:39 crc kubenswrapper[4984]: I0130 10:12:39.977653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:39Z","lastTransitionTime":"2026-01-30T10:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.079997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.080496 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.119724 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:34:39.923055121 +0000 UTC Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183559 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.183630 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.286986 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287066 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.287095 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.389540 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492967 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.492979 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.596398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.596849 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.596955 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.597062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.597162 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.699875 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.802200 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.904933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.904995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.905015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.905040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:40 crc kubenswrapper[4984]: I0130 10:12:40.905057 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:40Z","lastTransitionTime":"2026-01-30T10:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008164 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.008352 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.093311 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.093456 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.093659 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.093722 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.093713 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.094007 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.094369 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.094684 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.094985 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:41 crc kubenswrapper[4984]: E0130 10:12:41.095152 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.110676 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.120787 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:00:19.182518463 +0000 UTC Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.212917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.212974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.212996 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.213025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.213045 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.315433 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417903 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417933 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.417947 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.519932 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.625659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.625961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.626033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.626111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.626173 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.729293 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831414 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.831451 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.933930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934549 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:41 crc kubenswrapper[4984]: I0130 10:12:41.934634 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:41Z","lastTransitionTime":"2026-01-30T10:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037047 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037108 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.037137 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.121511 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:25:01.269652653 +0000 UTC Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.139696 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242314 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242388 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242409 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.242453 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.344694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345181 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345269 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.345328 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447342 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.447419 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550296 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.550320 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652217 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652264 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652288 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.652298 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754671 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754718 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754733 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.754742 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.857713 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.960886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.960965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.960988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.961015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:42 crc kubenswrapper[4984]: I0130 10:12:42.961035 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:42Z","lastTransitionTime":"2026-01-30T10:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.064501 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089164 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089203 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089215 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.089180 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089380 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089528 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089679 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.089785 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.122507 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:20:57.894448535 +0000 UTC Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.166998 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.269704 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.372642 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475272 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.475330 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575153 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575198 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.575241 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.593734 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.597944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.597985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.597994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.598007 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.598016 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661430 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.661468 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.677944 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:43Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:43 crc kubenswrapper[4984]: E0130 10:12:43.678112 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679770 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.679779 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782603 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.782624 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885200 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885229 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.885453 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:43 crc kubenswrapper[4984]: I0130 10:12:43.988575 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:43Z","lastTransitionTime":"2026-01-30T10:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.091829 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.122799 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:31:57.753617379 +0000 UTC Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194195 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194271 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.194282 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.296546 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398302 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398410 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.398440 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.500721 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602451 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.602479 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705266 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705275 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.705301 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807976 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.807998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.808007 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910480 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:44 crc kubenswrapper[4984]: I0130 10:12:44.910531 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:44Z","lastTransitionTime":"2026-01-30T10:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.012599 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089276 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089319 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089386 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089509 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.089559 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089720 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089752 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:45 crc kubenswrapper[4984]: E0130 10:12:45.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.114428 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.123517 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:10:31.911648125 +0000 UTC Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216620 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.216630 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318771 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.318848 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420847 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.420937 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516515 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/0.log" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516559 4984 generic.go:334] "Generic (PLEG): container finished" podID="0c5bace6-b520-4c9e-be10-a66fea4f9130" containerID="435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e" exitCode=1 Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516589 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerDied","Data":"435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.516925 4984 scope.go:117] "RemoveContainer" containerID="435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.522778 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.528360 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 
10:12:45.539679 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b
9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"set
up\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.552026 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.560707 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.569595 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc 
kubenswrapper[4984]: I0130 10:12:45.582946 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.591351 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.603743 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.615013 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625292 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625328 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.625416 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.631571 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.641583 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.657496 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.673774 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.684944 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.695592 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.707689 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.717919 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727423 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 
10:12:45.727646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727810 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.727903 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.729858 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:45Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830339 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.830914 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:45 crc kubenswrapper[4984]: I0130 10:12:45.933747 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:45Z","lastTransitionTime":"2026-01-30T10:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.038524 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.101211 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.113290 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.124395 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:46:31.588911658 +0000 UTC Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.126206 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140280 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140365 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140379 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.140409 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.144875 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.167302 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.183976 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.196586 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.206873 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.217391 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.226755 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.236952 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242447 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242463 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.242476 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.254885 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.271845 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.281697 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.292241 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc 
kubenswrapper[4984]: I0130 10:12:46.306050 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.319541 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.331109 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345115 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345176 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345194 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345218 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.345237 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447398 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.447486 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.522196 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/0.log" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.522280 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.536714 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.547775 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748
de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549887 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc 
kubenswrapper[4984]: I0130 10:12:46.549916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.549927 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.559329 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.580199 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating 
logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.601003 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.610979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.621711 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.630306 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.641295 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.648806 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b56
21187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651872 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.651986 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.657809 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.671010 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.685070 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.695298 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.708701 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.721774 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.731452 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.742849 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:46Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754624 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.754702 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857600 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.857646 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960571 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:46 crc kubenswrapper[4984]: I0130 10:12:46.960617 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:46Z","lastTransitionTime":"2026-01-30T10:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063235 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063274 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.063304 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090147 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090173 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090194 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.090206 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090288 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090371 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090497 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:47 crc kubenswrapper[4984]: E0130 10:12:47.090583 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.124787 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:39:02.502988753 +0000 UTC Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166283 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166351 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.166360 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269679 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269703 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.269720 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372460 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.372624 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474607 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.474615 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577149 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577165 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.577176 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680067 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.680114 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.782993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.783065 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885518 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.885530 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:47 crc kubenswrapper[4984]: I0130 10:12:47.987709 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:47Z","lastTransitionTime":"2026-01-30T10:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089884 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.089963 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.125046 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:37:14.350439591 +0000 UTC Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192879 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192930 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.192955 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295568 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.295633 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.398572 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501018 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.501098 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603368 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603476 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.603524 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706300 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706372 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.706380 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809267 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809319 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.809328 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:48 crc kubenswrapper[4984]: I0130 10:12:48.911201 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:48Z","lastTransitionTime":"2026-01-30T10:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.013615 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090136 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090164 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090150 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090276 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.090308 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090376 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090546 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:12:49 crc kubenswrapper[4984]: E0130 10:12:49.090576 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.116978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117013 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.117034 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.125582 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:28:33.698310343 +0000 UTC
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.219938 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.219988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.220002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.220029 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.220045 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.323215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425868 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425878 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.425907 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.528219 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630299 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630335 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.630375 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732091 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732126 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732137 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.732161 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834556 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.834566 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:49 crc kubenswrapper[4984]: I0130 10:12:49.937681 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:49Z","lastTransitionTime":"2026-01-30T10:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039626 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.039671 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.126361 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:52:53.219580141 +0000 UTC
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142381 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.142418 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.244940 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.244977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.244989 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.245004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.245014 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.354891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.354978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.354995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.355017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.355034 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458179 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458450 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.458657 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560577 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.560585 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663057 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663091 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.663131 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.766215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869410 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.869484 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972120 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:50 crc kubenswrapper[4984]: I0130 10:12:50.972164 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:50Z","lastTransitionTime":"2026-01-30T10:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074850 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074865 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.074877 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.089625 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.089627 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.089718 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.089805 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.089907 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.089947 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.090196 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:12:51 crc kubenswrapper[4984]: E0130 10:12:51.090312 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.126829 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:01:10.219525363 +0000 UTC
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176754 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176779 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176787 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.176836 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279005 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279069 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.279091 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381625 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381676 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381690 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.381698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483546 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.483577 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.586318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.689316 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791900 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.791934 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.895101 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:51 crc kubenswrapper[4984]: I0130 10:12:51.997410 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:51Z","lastTransitionTime":"2026-01-30T10:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.099922 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.127808 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:35:04.257070332 +0000 UTC Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.202534 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305507 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.305531 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.408448 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512090 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512174 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.512241 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.614888 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718031 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.718187 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821515 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.821547 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925639 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925663 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:52 crc kubenswrapper[4984]: I0130 10:12:52.925682 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:52Z","lastTransitionTime":"2026-01-30T10:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.027994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028141 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.028163 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.089730 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.090607 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.090729 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.090930 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.090965 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.091081 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.091328 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:53 crc kubenswrapper[4984]: E0130 10:12:53.091799 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.092307 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.127986 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:41:40.58654989 +0000 UTC Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.131732 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234505 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.234635 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337704 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.337715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440871 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.440926 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544189 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.544232 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.547643 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.552007 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.554382 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.582755 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.604294 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.635377 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647702 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647825 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.647850 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.665425 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.701452 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.717595 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.736465 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 
10:12:53.750118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750158 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.750177 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.752308 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.775703 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd 
in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.804758 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.819490 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.839442 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852705 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.852815 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.856974 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc 
kubenswrapper[4984]: I0130 10:12:53.873296 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.887111 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.903215 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.917501 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.930365 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:53Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:53 crc 
kubenswrapper[4984]: I0130 10:12:53.955640 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:53 crc kubenswrapper[4984]: I0130 10:12:53.955653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:53Z","lastTransitionTime":"2026-01-30T10:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058537 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058598 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.058608 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066580 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.066629 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.081532 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086159 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086192 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.086207 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.103289 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106817 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106855 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.106892 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.118551 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122818 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.122853 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.128976 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:30:31.170364875 +0000 UTC
Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.133944 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",
\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137566 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137594 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.137605 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.149127 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z"
Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.149243 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.160997 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161036 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161045 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.161070 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264118 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.264199 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.367637 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470701 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.470767 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.558517 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.559925 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/2.log" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.564814 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" exitCode=1 Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.564862 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.564905 4984 scope.go:117] "RemoveContainer" containerID="6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.566048 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:12:54 crc kubenswrapper[4984]: E0130 10:12:54.566368 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574431 4984 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574483 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.574548 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.580416 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.596988 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.616229 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.634022 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.647369 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.664278 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679006 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679116 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.679140 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.680979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.700664 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.729478 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a05b9f21e43035413e45980d6624015a9491a81a8bc903ecf452621a8972c57\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:25Z\\\",\\\"message\\\":\\\"682 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-sdmkd 
in node crc\\\\nI0130 10:12:25.963202 6682 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:25.963223 6682 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-sdmkd] creating logical port openshift-multus_network-metrics-daemon-sdmkd for pod on switch crc\\\\nI0130 10:12:25.963243 6682 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963278 6682 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0130 10:12:25.963287 6682 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0130 10:12:25.963302 6682 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nF0130 10:12:25.963309 6682 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 
default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.754879 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.768079 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.778966 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.782975 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783023 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.783103 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.794647 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc 
kubenswrapper[4984]: I0130 10:12:54.811096 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.825635 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.840041 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.858957 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.873648 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:54Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc 
kubenswrapper[4984]: I0130 10:12:54.886581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.886597 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989559 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:54 crc kubenswrapper[4984]: I0130 10:12:54.989630 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:54Z","lastTransitionTime":"2026-01-30T10:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089131 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089207 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089221 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089293 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.089149 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089386 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089518 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.089677 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.094625 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.129308 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:09:19.203293048 +0000 UTC Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197515 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197550 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197559 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197573 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.197583 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.299921 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402619 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402716 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.402770 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505969 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.505993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.506014 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.570814 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.575121 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:12:55 crc kubenswrapper[4984]: E0130 10:12:55.575311 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.587516 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.604415 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.608923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.608978 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.608994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.609016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.609032 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.622674 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.640212 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.656626 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.670134 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc 
kubenswrapper[4984]: I0130 10:12:55.687804 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.704481 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712021 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 
10:12:55.712132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.712186 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.720680 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.733617 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.747108 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.762219 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.785367 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.812897 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.814695 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.827116 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a
7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.842609 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.854826 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.866234 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:55Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917186 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 
10:12:55.917273 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:55 crc kubenswrapper[4984]: I0130 10:12:55.917326 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:55Z","lastTransitionTime":"2026-01-30T10:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020489 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020572 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020599 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.020616 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.107008 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metri
cs-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123418 4984 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.123554 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.126982 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.129825 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:01:31.424002619 +0000 UTC Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.144067 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.157624 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.174074 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc 
kubenswrapper[4984]: I0130 10:12:56.198939 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.216384 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.226635 4984 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.232455 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.255873 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.271227 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.287146 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.319331 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod 
openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329923 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.329942 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.350373 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.368770 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.391134 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.409334 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.424453 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.432943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 
10:12:56.432999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.433016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.433046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.433064 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.441815 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:12:56Z is after 2025-08-24T17:21:41Z" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536106 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.536184 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.638990 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743531 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743564 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.743586 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846831 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.846874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949438 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949513 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949525 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:56 crc kubenswrapper[4984]: I0130 10:12:56.949535 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:56Z","lastTransitionTime":"2026-01-30T10:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052624 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.052684 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089743 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089745 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090197 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089882 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.089807 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090369 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090480 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:57 crc kubenswrapper[4984]: E0130 10:12:57.090569 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.130379 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:17:30.537997885 +0000 UTC Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156796 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.156840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.260149 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362710 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362867 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.362918 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465344 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465420 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.465487 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.568886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.568979 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.569023 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.569052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.569069 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671684 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671782 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.671840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775549 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.775566 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878866 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878883 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.878930 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986585 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.986999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:57 crc kubenswrapper[4984]: I0130 10:12:57.987043 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:57Z","lastTransitionTime":"2026-01-30T10:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.091991 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092074 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092096 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.092113 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.131564 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:57:36.493849392 +0000 UTC Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196829 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196914 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.196988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.197011 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299731 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.299805 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.402952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403025 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403051 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.403069 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506612 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506661 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.506683 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609619 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609654 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.609676 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712921 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712966 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.712999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.713013 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815783 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.815801 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:58 crc kubenswrapper[4984]: I0130 10:12:58.918879 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:58Z","lastTransitionTime":"2026-01-30T10:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020898 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020934 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020945 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.020973 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089737 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089747 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089813 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.089924 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090046 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090200 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090311 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:12:59 crc kubenswrapper[4984]: E0130 10:12:59.090394 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124814 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124847 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.124860 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.131991 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:58:27.650482125 +0000 UTC Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227938 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227951 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.227979 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330281 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.330295 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.432998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.433167 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535845 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535872 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.535882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638947 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638956 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.638982 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741567 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741645 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.741653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843908 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843966 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.843974 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.946798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:12:59 crc kubenswrapper[4984]: I0130 10:12:59.947310 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:12:59Z","lastTransitionTime":"2026-01-30T10:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.050760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051009 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051077 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051146 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.051212 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.132109 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:34:17.683163391 +0000 UTC Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.154959 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.155103 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270821 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270934 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.270951 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.373912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.373968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.373985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.374012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.374029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476714 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476765 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.476807 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579587 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579633 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.579650 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.682136 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784715 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784737 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.784753 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888627 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888733 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.888762 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991455 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991470 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:00 crc kubenswrapper[4984]: I0130 10:13:00.991484 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:00Z","lastTransitionTime":"2026-01-30T10:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024487 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024642 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.024622409 +0000 UTC m=+149.590926233 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024640 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.024767 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024791 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024821 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024834 4984 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024845 4984 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024872 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.024861846 +0000 UTC m=+149.591165670 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024913 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.024890947 +0000 UTC m=+149.591194811 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024913 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024962 4984 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.024985 4984 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.025069 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.025044301 +0000 UTC m=+149.591348165 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.025067 4984 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.025221 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.025175975 +0000 UTC m=+149.591479859 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090030 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090100 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090168 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.090464 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.090504 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.090970 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.091162 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.090710 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093713 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.093881 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.126199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.126529 4984 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: E0130 10:13:01.126652 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs podName:cec0ee98-d570-417f-a2fb-7ac19e3b25c0 nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.126621384 +0000 UTC m=+149.692925248 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs") pod "network-metrics-daemon-sdmkd" (UID: "cec0ee98-d570-417f-a2fb-7ac19e3b25c0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.133155 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:42:49.741078188 +0000 UTC
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197353 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197379 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.197425 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.299755 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402637 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402675 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402687 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402700 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.402710 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505839 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.505882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.609988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610104 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.610150 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.712688 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713024 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.713071 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815517 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.815568 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918498 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:01 crc kubenswrapper[4984]: I0130 10:13:01.918558 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:01Z","lastTransitionTime":"2026-01-30T10:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021474 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.021492 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124636 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.124798 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.133864 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:18:22.401445844 +0000 UTC
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.227961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228058 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228112 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.228138 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331195 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331290 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331308 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331342 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.331365 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433768 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.433811 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.536954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537016 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.537072 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640618 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.640641 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743669 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.743706 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846197 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846209 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.846231 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949366 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:02 crc kubenswrapper[4984]: I0130 10:13:02.949385 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:02Z","lastTransitionTime":"2026-01-30T10:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055532 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.055597 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089366 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089431 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089459 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.089467 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.089605 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.089938 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:03 crc kubenswrapper[4984]: E0130 10:13:03.090113 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.134815 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:38:30.002087303 +0000 UTC Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.158924 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.261998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262073 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.262095 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365089 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365165 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365187 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365211 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.365227 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469158 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469243 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.469334 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.572500 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675444 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.675596 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779266 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.779279 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882863 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882910 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882944 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.882960 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986407 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:03 crc kubenswrapper[4984]: I0130 10:13:03.986475 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:03Z","lastTransitionTime":"2026-01-30T10:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089231 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.089318 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.135617 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:58:40.861159414 +0000 UTC Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192057 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192123 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.192163 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295932 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.295995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.296006 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399150 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399198 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.399215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458399 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.458446 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.473035 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.477922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.477980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.478000 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.478050 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.478074 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.552742 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558345 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558380 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.558397 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.578203 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:04Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:04 crc kubenswrapper[4984]: E0130 10:13:04.578409 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580478 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.580517 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684129 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.684143 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786761 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.786790 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889265 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.889329 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992679 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:04 crc kubenswrapper[4984]: I0130 10:13:04.992696 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:04Z","lastTransitionTime":"2026-01-30T10:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.089817 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.089854 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.089963 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.090033 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.090052 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.090087 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.090147 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:05 crc kubenswrapper[4984]: E0130 10:13:05.090208 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095196 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095216 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095241 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.095290 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.136384 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:39:18.803256216 +0000 UTC Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198329 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198405 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198439 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.198479 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301495 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.301533 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404210 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.404414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507403 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.507436 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609128 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.609141 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712137 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712154 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.712167 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814485 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.814505 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:05 crc kubenswrapper[4984]: I0130 10:13:05.917454 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:05Z","lastTransitionTime":"2026-01-30T10:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020642 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020696 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.020775 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.110181 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122886 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122927 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.122955 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.132401 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.136523 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:38:55.835879051 +0000 UTC Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.152293 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.173637 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.197459 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.212889 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 
10:13:06.226196 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226328 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.226346 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.234180 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.264474 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.297036 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.318592 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329348 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 
10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329377 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.329391 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.339995 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.356168 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc 
kubenswrapper[4984]: I0130 10:13:06.378004 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.389830 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.402459 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.424343 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432761 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.432779 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.445142 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.464423 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:06Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535463 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535534 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535553 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.535568 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638576 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.638613 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741699 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.741868 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844949 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.844981 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:06 crc kubenswrapper[4984]: I0130 10:13:06.948414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:06Z","lastTransitionTime":"2026-01-30T10:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.051436 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.089948 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.090107 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.090299 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090285 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.090382 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090531 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090725 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:07 crc kubenswrapper[4984]: E0130 10:13:07.090801 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.137373 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:43:27.97417421 +0000 UTC Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154636 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154664 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.154675 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.257895 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.257974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.257996 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.258027 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.258050 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360883 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360939 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.360991 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.361008 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464877 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464893 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464916 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.464933 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567844 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.567947 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.568004 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671499 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.671542 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.774992 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775059 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.775126 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878711 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878772 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878805 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.878818 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982456 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:07 crc kubenswrapper[4984]: I0130 10:13:07.982579 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:07Z","lastTransitionTime":"2026-01-30T10:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086168 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086279 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.086294 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.091177 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:08 crc kubenswrapper[4984]: E0130 10:13:08.091526 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.138204 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:54:40.196181958 +0000 UTC Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.189922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.189977 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.189988 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.190008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.190021 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293427 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.293562 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396405 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396440 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396503 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.396530 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499111 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499175 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499193 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.499209 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602813 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602830 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602855 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.602874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706100 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706130 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.706144 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809086 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809098 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.809134 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913193 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:08 crc kubenswrapper[4984]: I0130 10:13:08.913231 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:08Z","lastTransitionTime":"2026-01-30T10:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016151 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016171 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.016224 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089349 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089461 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.089551 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.089641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.089821 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.090027 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:09 crc kubenswrapper[4984]: E0130 10:13:09.090099 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121825 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.121987 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.138919 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:01:43.386315757 +0000 UTC Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225318 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225336 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.225379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327822 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.327866 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431178 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431240 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431342 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.431363 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535339 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.535361 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637793 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637809 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637835 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.637854 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740684 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740711 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.740724 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844095 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.844223 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948145 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948218 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:09 crc kubenswrapper[4984]: I0130 10:13:09.948228 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:09Z","lastTransitionTime":"2026-01-30T10:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050713 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050778 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.050838 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.104294 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.139719 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:26:02.758166054 +0000 UTC Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.153502 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256795 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.256825 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360447 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.360464 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.463843 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.463999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.464077 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.464105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.464122 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567563 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567631 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.567698 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.671601 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774670 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774785 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.774804 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878450 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.878473 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981894 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981917 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:10 crc kubenswrapper[4984]: I0130 10:13:10.981928 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:10Z","lastTransitionTime":"2026-01-30T10:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085726 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.085749 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.089833 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.089861 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.089894 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090003 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.090044 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090164 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090281 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:11 crc kubenswrapper[4984]: E0130 10:13:11.090390 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.140746 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:41:06.89260533 +0000 UTC Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.189298 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.292889 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.292999 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.293019 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.293047 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.293067 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396233 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396303 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396334 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.396350 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499208 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499277 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.499317 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602166 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602224 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602234 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602272 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.602285 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706301 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706354 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.706407 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809162 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809232 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809268 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.809278 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911695 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911723 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:11 crc kubenswrapper[4984]: I0130 10:13:11.911735 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:11Z","lastTransitionTime":"2026-01-30T10:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014311 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.014351 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116918 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116986 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.116995 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.141759 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:59:45.070841098 +0000 UTC Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219541 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219589 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219604 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219626 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.219641 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.321948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322001 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.322043 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.424980 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.425069 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527570 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527584 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.527595 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630662 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630674 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630692 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.630704 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733121 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733225 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.733275 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836385 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836463 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836493 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.836505 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.939913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940038 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:12 crc kubenswrapper[4984]: I0130 10:13:12.940094 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:12Z","lastTransitionTime":"2026-01-30T10:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043347 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043396 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043412 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.043447 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.089993 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.090047 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.090018 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.090098 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090178 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090420 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090491 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:13 crc kubenswrapper[4984]: E0130 10:13:13.090576 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.142545 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:02:46.305985641 +0000 UTC Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145741 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145769 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145796 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.145808 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248422 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.248559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351885 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351924 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.351940 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454603 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.454653 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557278 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557298 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.557311 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660119 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660180 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.660214 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.765873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.765953 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.765983 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.766103 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.766747 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869167 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869277 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869340 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.869363 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972107 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972166 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:13 crc kubenswrapper[4984]: I0130 10:13:13.972175 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:13Z","lastTransitionTime":"2026-01-30T10:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074903 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074938 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074950 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.074975 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.143214 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:12:18.757166062 +0000 UTC Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177760 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177823 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177838 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177859 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.177877 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280829 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.280881 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383635 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383675 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.383691 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486870 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.486891 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588690 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588704 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.588714 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615308 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615766 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615784 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.615793 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.634734 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639280 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639325 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639338 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.639371 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.654835 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658880 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658957 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.658982 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.676821 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681240 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.681306 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.696843 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700606 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700615 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700629 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.700640 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.712893 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:14Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:14 crc kubenswrapper[4984]: E0130 10:13:14.713001 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714227 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714309 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.714344 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816417 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816439 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.816448 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918687 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918710 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:14 crc kubenswrapper[4984]: I0130 10:13:14.918721 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:14Z","lastTransitionTime":"2026-01-30T10:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020750 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.020792 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089454 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089463 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089537 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.089592 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.089731 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.089949 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.090095 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:13:15 crc kubenswrapper[4984]: E0130 10:13:15.090229 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123718 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123748 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.123759 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.143957 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:04:57.782447177 +0000 UTC
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226466 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226533 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.226612 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330205 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330222 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330291 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.330311 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433355 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433373 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.433414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535628 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.535671 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638820 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638943 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.638963 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742411 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742441 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.742454 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845481 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845507 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.845519 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948807 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:15 crc kubenswrapper[4984]: I0130 10:13:15.948841 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:15Z","lastTransitionTime":"2026-01-30T10:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050901 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.050919 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.109461 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7
b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.129859 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.144335 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:10:17.164121282 +0000 UTC Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.147114 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83196244-71fa-4003-aa05-0f1a7de9db9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c475de8d49f5aefa32c82d036020b47bc55061e42d5da99bb1052ef7f0ca0b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc 
kubenswrapper[4984]: I0130 10:13:16.153905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153929 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153960 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.153985 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.166040 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.182123 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.198740 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.219693 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.237383 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.253510 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257389 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 
10:13:16.257436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257453 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257475 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.257490 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.272633 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.295907 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.321080 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.336825 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.349584 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359666 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.359680 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.361955 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc 
kubenswrapper[4984]: I0130 10:13:16.380178 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416
f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.393132 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc0
86a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.406979 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.425427 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea2139
7b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:16Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462905 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.462941 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566289 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.566421 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.668819 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771238 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.771285 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873894 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873941 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873969 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.873981 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975812 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975860 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:16 crc kubenswrapper[4984]: I0130 10:13:16.975882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:16Z","lastTransitionTime":"2026-01-30T10:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077535 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077677 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.077840 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089772 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089826 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.089880 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089890 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.089908 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.089988 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.090121 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:17 crc kubenswrapper[4984]: E0130 10:13:17.090288 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.145413 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 02:57:52.65123404 +0000 UTC Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180076 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180092 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.180102 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283479 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.283518 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.386985 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387054 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.387087 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489716 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489759 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.489776 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593359 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593386 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593418 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.593440 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695804 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695848 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695859 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695873 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.695885 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798660 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.798727 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:17 crc kubenswrapper[4984]: I0130 10:13:17.901561 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:17Z","lastTransitionTime":"2026-01-30T10:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004384 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004452 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.004523 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107510 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107538 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.107559 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.146301 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:09:36.204741492 +0000 UTC Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210552 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210596 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210630 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.210657 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314063 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314081 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314105 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.314125 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417073 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.417095 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.519942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520002 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520023 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.520074 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624794 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624846 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624888 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.624907 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727442 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.727514 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830644 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.830683 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934244 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934305 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934333 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:18 crc kubenswrapper[4984]: I0130 10:13:18.934351 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:18Z","lastTransitionTime":"2026-01-30T10:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037120 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037185 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037203 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.037215 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089634 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089635 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.089740 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.089733 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.089874 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.089926 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.090315 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.090762 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"
Jan 30 10:13:19 crc kubenswrapper[4984]: E0130 10:13:19.090930 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139354 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139800 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139853 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.139897 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.146445 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:04:44.893902351 +0000 UTC
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242726 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242766 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.242783 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346094 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346301 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.346327 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.449363 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.552578 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655091 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655101 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.655121 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758102 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758131 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.758185 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861462 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861490 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.861543 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964462 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964487 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:19 crc kubenswrapper[4984]: I0130 10:13:19.964537 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:19Z","lastTransitionTime":"2026-01-30T10:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.067899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.067964 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.067984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.068008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.068028 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.147660 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:56:23.980491353 +0000 UTC
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171152 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171237 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.171253 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274661 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274724 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274756 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.274770 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377577 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377681 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377704 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.377753 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480706 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480803 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480849 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.480871 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584494 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.584826 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687344 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687392 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687404 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.687433 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790942 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790962 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.790987 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.791005 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894067 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894122 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894156 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.894169 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997408 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997486 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997512 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997542 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:20 crc kubenswrapper[4984]: I0130 10:13:20.997564 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:20Z","lastTransitionTime":"2026-01-30T10:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090137 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090205 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090203 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.090304 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090395 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090517 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090691 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:13:21 crc kubenswrapper[4984]: E0130 10:13:21.090854 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100727 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100812 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100830 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.100874 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.148135 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:50:05.524434661 +0000 UTC
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204470 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204511 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.204528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307665 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.307707 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411638 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411657 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.411696 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514426 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514491 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514540 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.514558 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618188 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618229 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.618267 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721524 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721548 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.721603 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824286 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824387 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824430 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.824447 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927715 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927826 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.927936 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:21 crc kubenswrapper[4984]: I0130 10:13:21.928029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:21Z","lastTransitionTime":"2026-01-30T10:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030833 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030884 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.030927 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133605 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133667 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133688 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.133701 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.148632 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:24:22.752302153 +0000 UTC Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236621 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236652 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236660 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.236684 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340228 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340789 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.340922 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.341046 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444374 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444434 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444451 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.444491 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547516 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547601 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547630 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.547655 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651135 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651161 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651192 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.651219 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.754239 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.754673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.754904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.755134 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.755396 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859006 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859072 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859093 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.859109 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962646 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962709 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962729 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962752 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:22 crc kubenswrapper[4984]: I0130 10:13:22.962769 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:22Z","lastTransitionTime":"2026-01-30T10:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.065875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.065961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.065983 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.066010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.066026 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089710 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089708 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089712 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.089830 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090028 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090114 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090214 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:23 crc kubenswrapper[4984]: E0130 10:13:23.090430 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.149194 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:32:06.781158225 +0000 UTC Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.168919 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.168994 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.169012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.169034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.169050 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.272959 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273042 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273065 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273092 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.273115 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.375845 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.375982 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.376003 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.376028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.376041 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478758 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478882 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478913 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.478936 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582445 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.582521 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684588 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684628 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684636 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684650 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.684658 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787688 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787757 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787811 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.787832 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891428 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891708 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891743 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.891757 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995419 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995484 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995501 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995522 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:23 crc kubenswrapper[4984]: I0130 10:13:23.995538 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:23Z","lastTransitionTime":"2026-01-30T10:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099449 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099530 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099558 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.099619 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.149969 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:44:18.903990151 +0000 UTC Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203064 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203160 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203183 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203214 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.203239 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306346 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306371 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.306389 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408425 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.408487 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511431 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511504 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511525 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.511579 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614634 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614703 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.614715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717315 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717364 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.717379 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820393 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820433 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820467 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.820490 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876344 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876413 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.876475 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.898970 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.903857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.903961 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.903984 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.904012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.904028 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.917570 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923611 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.923622 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.944629 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949324 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949403 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949418 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.949446 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.968827 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973042 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973113 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973159 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.973179 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.992671 4984 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T10:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"27e6287f-3fa9-4a7b-9d27-962ff895c3d3\\\",\\\"systemUUID\\\":\\\"da0bcd04-2174-455a-abae-7839c96298f6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:24Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:24 crc kubenswrapper[4984]: E0130 10:13:24.992840 4984 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.994958 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995040 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995070 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995104 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:24 crc kubenswrapper[4984]: I0130 10:13:24.995128 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:24Z","lastTransitionTime":"2026-01-30T10:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089592 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089650 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.089654 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.089773 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.089881 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.089980 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:25 crc kubenswrapper[4984]: E0130 10:13:25.090047 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098327 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098362 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098376 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098391 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.098404 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.150465 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:30:25.526642384 +0000 UTC Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201488 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201560 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201583 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201614 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.201640 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304790 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304816 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.304826 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407330 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407424 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407469 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.407487 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.509864 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.509899 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.509958 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.510032 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.510046 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613420 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613506 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613547 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.613562 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.716798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717013 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717052 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717082 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.717105 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820068 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820212 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820231 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820323 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.820340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923685 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923697 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923712 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:25 crc kubenswrapper[4984]: I0130 10:13:25.923723 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:25Z","lastTransitionTime":"2026-01-30T10:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027215 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027310 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.027357 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.107329 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a410aa98-4908-4c3d-bca6-f7e056916e10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 10:11:50.956993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 10:11:50.957787 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1248037336/tls.crt::/tmp/serving-cert-1248037336/tls.key\\\\\\\"\\\\nI0130 10:11:56.228083 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 10:11:56.235102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 10:11:56.235129 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 10:11:56.235152 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 10:11:56.235159 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 10:11:56.244759 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 10:11:56.244849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244892 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 10:11:56.244979 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 10:11:56.245005 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 10:11:56.245033 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 10:11:56.245140 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 10:11:56.245440 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 10:11:56.247714 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.125230 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131496 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131539 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131555 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.131594 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.140032 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6tdgl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a9a337d-bc6b-4a98-8abc-7569fa4fa312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d08d8c9bf7fccb9c108f8ae65511266e975a5f2974a530f2d8c9e4c473d0a79a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqknm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6tdgl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.151231 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:49:11.216468381 +0000 UTC Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.153561 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhwz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sdmkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 
30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.173394 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"007eb083-e87a-44f4-ab1b-7ad0ef8c8c19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44cc42552192ee31faeea925243e3fa538d8c7d8cba3c488262b2626367817b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27037d8a202c4447bd147f9130d18fee72bf77249eeac2ca946fa378d575b77c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6cc61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78aeea6b89d6c
c61508bc517e8416f0c3eb81c518bd38422e720bc4a97e2f980\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dc3facba5e47aafd6e57eff00f6c95df69f556aa291bf3ae3353bddf7248e68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7bd92d746ff0a164ba0ae75aab874cfa59d97353b2d18a887ac33ce955d553d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://149b21dc5d1ffb371f66f66874ca6657cddcbb0221a690444462ed7fcb1bf167\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e363389e720c7b6f6d6e8e2a03f71bdc06fac5307f6fb23d64f0f401acb80a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:12:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xp8r6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5vcbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.188167 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l5dvh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a73d7427-d84d-469a-8a34-e32bcd26e1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1967110a9abd6c26fbe66210aa3cae7533e32db257b5621187496d2472e38aad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b
235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7x4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l5dvh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.199756 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bcf749c-5a91-4939-9805-775678104b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b36fac8a1f988356f7dd50571796786056b99218bdc8efc0f2e6ff8a0f06bbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://556e89f637026c73c38bb05df5d8eddc00915
764822109fc3a3e46a162325b8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrnsd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:12:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9ml72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.208101 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83196244-71fa-4003-aa05-0f1a7de9db9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c475de8d49f5aefa32c82d036020b47bc55061e42d5da99bb1052ef7f0ca0b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ba82362069800a90f80ecd6105cc7b52d2d4a1648007bd4c920595a4fb6a493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.218222 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"30c92552-e288-4c4c-9b23-814ef6db17cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d21dcb816edf7dc1c0794bb9fa3f5cc68e91d12487fb7bee876c50cdaaf51b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a40add3a07da1f89729c0342c125961888e099be4ff2b145594879b8a90564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1feed383461f2eaac79a59cd392c39ebd88cf9d57d2662c89a77beff0b2958f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.229634 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e39c75d0addab689f14044bc3f4c74b7e5fb44beb239c595e230aba45cde4f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.233990 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234046 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234060 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.234070 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.242450 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8d9b7c5fc634d4fd6d6a5491b4d6368a857ead0454d1a6ad30b3831521e051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070877d4bcbe12af0c33ea335240c8c2f74fe2ad1fadc5fb95c18a3d7ed267c8\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.254474 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.275639 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354152fc-b1da-4633-9c3e-b920ca230df4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b219cc14550cacf7932c107f3508a1eaab4965e1437340cafd4869425afa3a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e01a57b87b36cbc72aa663ddf727dc57832ef92e9b7f734652e5d3bef92da15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05eb7e0b8908f8ba008baa9571736863b42f041972a4d53d515e949cea1a7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://221f4e7f0fa8e5401dc5e921bd29aa966d1cd0f0430c6332a6e6f799f54ba807\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11fbaa06f7122ed2b7fd693bc0b998662c280779fff6272c0eee273a5404f992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcc5746492f9ef329e368122114e60b7c63d0aa2a724a5019eefa769cfda2bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20153ddaa08dee4d8064bd10a0a59d22e5f176e431a6c88d6008cb59bad83a53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4d4837067ca84e58ac5358b222b189b204f24e81a3214463a41dbe069d4bb31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.288439 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4abcf3a-650b-4d07-81dd-b26137b8a2f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f86d874d07369ae1ebd43c06a67b0442fef5d5864fbcf1bc5e48ea4edbe93358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75310a42c37bb0d2cba55170ef03a7901763052674cf46e89bce30c431f9a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70f36da520f8752710ebd73c0220c2775c7305de269a3d4b5da59fc939370187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7b18956c981b45c23a82710243a1a8f61ca80287d54c485684a8c3d84b2a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.301558 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.317452 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41dab1387c4059b3311dbabc4da38848380d823eed1ddbe64e7130af3a6afabc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.332657 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1bd910-b683-42bf-966f-51a04ac18bd2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56fa79274209f3a41d8c53563e46723a826e8a086dc8be190e0b0619b638858\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmq8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m4gnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 
10:13:26.337287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337299 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337528 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.337544 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.354139 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bnkpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c5bace6-b520-4c9e-be10-a66fea4f9130\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:12:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:45Z\\\",\\\"message\\\":\\\"2026-01-30T10:11:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f\\\\n2026-01-30T10:11:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d574152c-b1bd-4dd2-9743-71e7348b415f to /host/opt/cni/bin/\\\\n2026-01-30T10:12:00Z [verbose] multus-daemon started\\\\n2026-01-30T10:12:00Z [verbose] 
Readiness Indicator file check\\\\n2026-01-30T10:12:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbclc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bnkpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.382404 4984 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"000a8c9a-5211-4997-8b97-d37e227c899a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T10:12:54Z\\\",\\\"message\\\":\\\"090 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-6tdgl in node 
crc\\\\nI0130 10:12:54.120214 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-6tdgl after 0 failed attempt(s)\\\\nI0130 10:12:54.120223 7090 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-6tdgl\\\\nI0130 10:12:54.120214 7090 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 10:12:54.120261 7090 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120291 7090 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-bnkpj\\\\nI0130 10:12:54.120309 7090 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-bnkpj in node crc\\\\nI0130 10:12:54.120330 7090 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-bnkpj after 0 failed attempt(s)\\\\nF0130 10:12:54.120332 7090 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T10:12:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T10:12:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452682d07c56edf6ab
8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T10:11:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T10:11:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T10:11:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xrm2v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T10:13:26Z is after 2025-08-24T17:21:41Z" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440609 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440672 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.440734 4984 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544190 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544313 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544341 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.544359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647729 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647801 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647827 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647857 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.647882 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.750472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.750952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.751125 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.751320 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.751528 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854369 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854416 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854432 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854455 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.854472 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957435 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957492 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:26 crc kubenswrapper[4984]: I0130 10:13:26.957510 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:26Z","lastTransitionTime":"2026-01-30T10:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060049 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060110 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060132 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060180 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.060205 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.089749 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.089774 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.089859 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.089757 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.090021 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.090109 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.090236 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:27 crc kubenswrapper[4984]: E0130 10:13:27.090296 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.151702 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:10:56.279908438 +0000 UTC Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163293 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163349 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163370 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163394 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.163413 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266457 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266502 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266518 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266542 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.266558 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369834 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369867 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369897 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.369919 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473514 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473791 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.473916 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577744 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577774 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.577796 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680308 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680322 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.680332 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784084 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784165 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.784208 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.887653 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.887902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.888033 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.888133 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.888219 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990575 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990611 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990622 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990637 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:27 crc kubenswrapper[4984]: I0130 10:13:27.990649 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:27Z","lastTransitionTime":"2026-01-30T10:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093759 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093819 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093837 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.093878 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.152535 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:46:03.082569602 +0000 UTC Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.196904 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197206 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197350 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197461 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.197563 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300649 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300689 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300719 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300736 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.300747 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.402851 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.402948 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.402973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.403010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.403035 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505155 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505211 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505233 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.505241 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607920 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607971 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607981 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.607993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.608002 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.710925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.710974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.710986 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.711003 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.711015 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.813869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.813972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.813990 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.814012 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.814029 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916861 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916926 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.916981 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:28 crc kubenswrapper[4984]: I0130 10:13:28.917005 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:28Z","lastTransitionTime":"2026-01-30T10:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020454 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020521 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.020537 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.089781 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.089775 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.089806 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.090104 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.090144 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.090217 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.089965 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:29 crc kubenswrapper[4984]: E0130 10:13:29.090322 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122713 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122755 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122767 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122781 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.122792 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.153143 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:03:44.187793843 +0000 UTC Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225668 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225740 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.225780 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329014 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329071 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329085 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.329094 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432304 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432321 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432343 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.432359 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535858 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535935 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535960 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.535992 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.536012 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638406 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638439 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638473 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.638494 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740655 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740691 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740701 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.740727 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843356 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843367 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843382 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.843393 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946395 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946458 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946477 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946500 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:29 crc kubenswrapper[4984]: I0130 10:13:29.946516 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:29Z","lastTransitionTime":"2026-01-30T10:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049298 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049360 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049378 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049402 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.049419 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.090997 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:30 crc kubenswrapper[4984]: E0130 10:13:30.091344 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xrm2v_openshift-ovn-kubernetes(000a8c9a-5211-4997-8b97-d37e227c899a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151700 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151762 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151776 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151799 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.151814 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.153983 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:09:02.272280635 +0000 UTC Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.254954 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.254998 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.255010 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.255030 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.255048 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.363554 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.363745 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.364581 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.364722 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.364822 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469230 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469312 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.469340 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573170 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573191 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573219 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.573243 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.676973 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677099 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677140 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677173 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.677193 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780798 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780842 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780856 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.780892 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884471 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884529 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884544 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884562 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.884575 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.986993 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987034 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987048 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987062 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:30 crc kubenswrapper[4984]: I0130 10:13:30.987071 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:30Z","lastTransitionTime":"2026-01-30T10:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089297 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089355 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089374 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089592 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089632 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089624 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089647 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089698 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089631 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.089715 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089739 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089798 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.089889 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.154826 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:40:45.750918759 +0000 UTC Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192600 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192658 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192673 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192693 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.192705 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296124 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296184 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296201 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296226 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.296281 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.398974 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399008 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399017 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399029 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.399039 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502235 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502613 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502730 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502875 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.502985 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.606732 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607088 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607297 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607409 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.607503 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.698391 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.699630 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/0.log"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.699875 4984 generic.go:334] "Generic (PLEG): container finished" podID="0c5bace6-b520-4c9e-be10-a66fea4f9130" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2" exitCode=1
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.699986 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerDied","Data":"d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.700298 4984 scope.go:117] "RemoveContainer" containerID="435d26787ea02d71b8737d9d2640706086c501c84df1d6b51958bfd35e92f24e"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.701044 4984 scope.go:117] "RemoveContainer" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2"
Jan 30 10:13:31 crc kubenswrapper[4984]: E0130 10:13:31.701446 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bnkpj_openshift-multus(0c5bace6-b520-4c9e-be10-a66fea4f9130)\"" pod="openshift-multus/multus-bnkpj" podUID="0c5bace6-b520-4c9e-be10-a66fea4f9130"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711891 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711937 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711952 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711970 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.711983 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.754089 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.75406703 podStartE2EDuration="21.75406703s" podCreationTimestamp="2026-01-30 10:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.730162965 +0000 UTC m=+116.296466799" watchObservedRunningTime="2026-01-30 10:13:31.75406703 +0000 UTC m=+116.320370854"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.776717 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.776694409 podStartE2EDuration="1m29.776694409s" podCreationTimestamp="2026-01-30 10:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.755038888 +0000 UTC m=+116.321342812" watchObservedRunningTime="2026-01-30 10:13:31.776694409 +0000 UTC m=+116.342998253"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.815482 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.815896 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.816089 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.816307 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.816449 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.878950 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=95.878935211 podStartE2EDuration="1m35.878935211s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.874309928 +0000 UTC m=+116.440613792" watchObservedRunningTime="2026-01-30 10:13:31.878935211 +0000 UTC m=+116.445239035"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.888880 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.888855485 podStartE2EDuration="1m2.888855485s" podCreationTimestamp="2026-01-30 10:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.888105464 +0000 UTC m=+116.454409328" watchObservedRunningTime="2026-01-30 10:13:31.888855485 +0000 UTC m=+116.455159349"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918578 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918595 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.918608 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:31Z","lastTransitionTime":"2026-01-30T10:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.941633 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podStartSLOduration=95.941612648 podStartE2EDuration="1m35.941612648s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.94098517 +0000 UTC m=+116.507289024" watchObservedRunningTime="2026-01-30 10:13:31.941612648 +0000 UTC m=+116.507916472"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.967071 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9ml72" podStartSLOduration=94.967046417 podStartE2EDuration="1m34.967046417s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.966881943 +0000 UTC m=+116.533185807" watchObservedRunningTime="2026-01-30 10:13:31.967046417 +0000 UTC m=+116.533350271"
Jan 30 10:13:31 crc kubenswrapper[4984]: I0130 10:13:31.987006 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=94.986988669 podStartE2EDuration="1m34.986988669s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:31.985588679 +0000 UTC m=+116.551892513" watchObservedRunningTime="2026-01-30 10:13:31.986988669 +0000 UTC m=+116.553292503"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.014883 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6tdgl" podStartSLOduration=96.014864338 podStartE2EDuration="1m36.014864338s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:32.014747375 +0000 UTC m=+116.581051199" watchObservedRunningTime="2026-01-30 10:13:32.014864338 +0000 UTC m=+116.581168172"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020869 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020902 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020911 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020925 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.020944 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.042495 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5vcbf" podStartSLOduration=96.04247937 podStartE2EDuration="1m36.04247937s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:32.041706028 +0000 UTC m=+116.608009852" watchObservedRunningTime="2026-01-30 10:13:32.04247937 +0000 UTC m=+116.608783194"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.057659 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l5dvh" podStartSLOduration=95.057642025 podStartE2EDuration="1m35.057642025s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:32.0567627 +0000 UTC m=+116.623066524" watchObservedRunningTime="2026-01-30 10:13:32.057642025 +0000 UTC m=+116.623945839"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123536 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123565 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123574 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123586 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.123595 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.155514 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:58:48.170315447 +0000 UTC
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.225850 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226207 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226331 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.226567 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328780 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328788 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328805 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.328816 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431593 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431656 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431678 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431707 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.431730 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534642 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534721 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534738 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534763 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.534779 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637815 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637874 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637892 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637915 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.637933 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.706420 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.740972 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741028 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741039 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741057 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.741068 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844526 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844597 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844616 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844643 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.844660 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947294 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947358 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947375 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947397 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:32 crc kubenswrapper[4984]: I0130 10:13:32.947414 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:32Z","lastTransitionTime":"2026-01-30T10:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050053 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050097 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050109 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050127 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.050138 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.089608 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.089613 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.089701 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.089785 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.089900 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.090109 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.090201 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:13:33 crc kubenswrapper[4984]: E0130 10:13:33.090379 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154306 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154415 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154436 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154464 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.154486 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.156310 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:15:01.089259363 +0000 UTC
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258317 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258421 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258443 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.258912 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.259195 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361543 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361608 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361626 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.361668 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464509 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464617 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464648 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.464668 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.567909 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.567965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.567995 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.568015 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.568028 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671651 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671720 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671739 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671800 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.671831 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.774968 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775114 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775139 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775169 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.775189 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878557 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878694 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878717 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878749 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.878775 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981773 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981824 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981840 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.981862 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:33 crc kubenswrapper[4984]: I0130 10:13:33.982294 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:33Z","lastTransitionTime":"2026-01-30T10:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.086580 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.086659 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.086680 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.087193 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.087239 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.156866 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:40:35.435227464 +0000 UTC Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190497 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190551 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190591 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.190646 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294143 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294204 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294221 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294242 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.294281 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.396965 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397035 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397055 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397080 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.397097 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500519 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500582 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500600 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500623 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.500641 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603765 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603854 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603876 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603903 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.603924 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.706928 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707004 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707027 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707056 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.707077 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811075 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811151 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811172 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811199 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.811217 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914177 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914287 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914326 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914356 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:34 crc kubenswrapper[4984]: I0130 10:13:34.914377 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:34Z","lastTransitionTime":"2026-01-30T10:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018337 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018429 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018448 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018472 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.018489 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:35Z","lastTransitionTime":"2026-01-30T10:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088734 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088775 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088782 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088797 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.088806 4984 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T10:13:35Z","lastTransitionTime":"2026-01-30T10:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089670 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089759 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089756 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.089753 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.089866 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.089945 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.090090 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:35 crc kubenswrapper[4984]: E0130 10:13:35.090199 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.145998 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf"] Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.146349 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.148597 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.149126 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.150387 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.150972 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.157341 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:48:27.765167299 +0000 UTC Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.157386 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.175895 4984 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.301850 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302058 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dda86ca6-bc08-4931-b254-fdcb9483081e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302433 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dda86ca6-bc08-4931-b254-fdcb9483081e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302521 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dda86ca6-bc08-4931-b254-fdcb9483081e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.302573 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: 
\"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403600 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dda86ca6-bc08-4931-b254-fdcb9483081e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dda86ca6-bc08-4931-b254-fdcb9483081e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403684 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403735 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403769 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/dda86ca6-bc08-4931-b254-fdcb9483081e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.403925 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.404369 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dda86ca6-bc08-4931-b254-fdcb9483081e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.404672 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dda86ca6-bc08-4931-b254-fdcb9483081e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.413360 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dda86ca6-bc08-4931-b254-fdcb9483081e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc 
kubenswrapper[4984]: I0130 10:13:35.426145 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dda86ca6-bc08-4931-b254-fdcb9483081e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wrqkf\" (UID: \"dda86ca6-bc08-4931-b254-fdcb9483081e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.475471 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.717970 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" event={"ID":"dda86ca6-bc08-4931-b254-fdcb9483081e","Type":"ContainerStarted","Data":"9acb19d12c8f42e6ff263498c55bc7f980a6c9a454367aa8f8009c9fea132ed4"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.718013 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" event={"ID":"dda86ca6-bc08-4931-b254-fdcb9483081e","Type":"ContainerStarted","Data":"9be3a224d7bc76cee8c1d6659423194c6d53348bf17c8698f3f8df0e0f7ca8e0"} Jan 30 10:13:35 crc kubenswrapper[4984]: I0130 10:13:35.732055 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wrqkf" podStartSLOduration=98.731988673 podStartE2EDuration="1m38.731988673s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:35.73084038 +0000 UTC m=+120.297144214" watchObservedRunningTime="2026-01-30 10:13:35.731988673 +0000 UTC m=+120.298292527" Jan 30 10:13:36 crc kubenswrapper[4984]: E0130 10:13:36.045770 4984 
kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 10:13:36 crc kubenswrapper[4984]: E0130 10:13:36.182587 4984 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089835 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089872 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089895 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:37 crc kubenswrapper[4984]: I0130 10:13:37.089941 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090018 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090111 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090311 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:37 crc kubenswrapper[4984]: E0130 10:13:37.090472 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089723 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089753 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089758 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.089849 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:39 crc kubenswrapper[4984]: I0130 10:13:39.089739 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.089970 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.090318 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:39 crc kubenswrapper[4984]: E0130 10:13:39.090915 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089581 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089660 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089611 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.089798 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.089923 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.090041 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:41 crc kubenswrapper[4984]: I0130 10:13:41.089753 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.090162 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:41 crc kubenswrapper[4984]: E0130 10:13:41.183840 4984 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090204 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090226 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090696 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090366 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:43 crc kubenswrapper[4984]: I0130 10:13:43.090324 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090791 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090753 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:43 crc kubenswrapper[4984]: E0130 10:13:43.090989 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.090182 4984 scope.go:117] "RemoveContainer" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2" Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.750877 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log" Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.752042 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"} Jan 30 10:13:44 crc kubenswrapper[4984]: I0130 10:13:44.778344 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bnkpj" podStartSLOduration=108.778321736 podStartE2EDuration="1m48.778321736s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:44.776646688 +0000 UTC m=+129.342950512" watchObservedRunningTime="2026-01-30 10:13:44.778321736 +0000 UTC m=+129.344625560" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.089765 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.089778 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.089960 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.089795 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.090107 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.090367 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.090719 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.090880 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.091235 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.756693 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.758969 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerStarted","Data":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.759625 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.794658 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podStartSLOduration=109.794641128 podStartE2EDuration="1m49.794641128s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:13:45.792789385 
+0000 UTC m=+130.359093269" watchObservedRunningTime="2026-01-30 10:13:45.794641128 +0000 UTC m=+130.360944952" Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.970599 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sdmkd"] Jan 30 10:13:45 crc kubenswrapper[4984]: I0130 10:13:45.970726 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:45 crc kubenswrapper[4984]: E0130 10:13:45.970856 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:46 crc kubenswrapper[4984]: E0130 10:13:46.185328 4984 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 10:13:47 crc kubenswrapper[4984]: I0130 10:13:47.090037 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:47 crc kubenswrapper[4984]: I0130 10:13:47.090098 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:47 crc kubenswrapper[4984]: E0130 10:13:47.090215 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:47 crc kubenswrapper[4984]: E0130 10:13:47.090409 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:47 crc kubenswrapper[4984]: I0130 10:13:47.090990 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:47 crc kubenswrapper[4984]: E0130 10:13:47.091201 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:48 crc kubenswrapper[4984]: I0130 10:13:48.089590 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:48 crc kubenswrapper[4984]: E0130 10:13:48.089838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:49 crc kubenswrapper[4984]: I0130 10:13:49.089609 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:49 crc kubenswrapper[4984]: I0130 10:13:49.089669 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:49 crc kubenswrapper[4984]: I0130 10:13:49.089778 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:49 crc kubenswrapper[4984]: E0130 10:13:49.089938 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:49 crc kubenswrapper[4984]: E0130 10:13:49.090078 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:49 crc kubenswrapper[4984]: E0130 10:13:49.090376 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:50 crc kubenswrapper[4984]: I0130 10:13:50.089092 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:50 crc kubenswrapper[4984]: E0130 10:13:50.089269 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sdmkd" podUID="cec0ee98-d570-417f-a2fb-7ac19e3b25c0" Jan 30 10:13:51 crc kubenswrapper[4984]: I0130 10:13:51.089186 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:51 crc kubenswrapper[4984]: I0130 10:13:51.089235 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:51 crc kubenswrapper[4984]: I0130 10:13:51.089195 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:51 crc kubenswrapper[4984]: E0130 10:13:51.089512 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 10:13:51 crc kubenswrapper[4984]: E0130 10:13:51.089411 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 10:13:51 crc kubenswrapper[4984]: E0130 10:13:51.089691 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 10:13:52 crc kubenswrapper[4984]: I0130 10:13:52.089556 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd" Jan 30 10:13:52 crc kubenswrapper[4984]: I0130 10:13:52.092741 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 10:13:52 crc kubenswrapper[4984]: I0130 10:13:52.093535 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.089464 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.089568 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.089568 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.093859 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.094121 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.098722 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 10:13:53 crc kubenswrapper[4984]: I0130 10:13:53.103146 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.505163 4984 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.553040 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.553379 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.555550 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzff9"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.555898 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.560490 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.560903 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.561979 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.562156 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.567612 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.567686 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568038 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568231 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568544 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568947 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: W0130 10:13:55.569040 4984 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 30 10:13:55 crc kubenswrapper[4984]: E0130 10:13:55.569064 4984 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.568957 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b8xqj"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569395 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569448 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569573 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569778 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.569997 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570106 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570119 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570206 4984 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570266 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.570366 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.574761 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.576553 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.578856 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.579462 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.586240 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587165 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587336 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587789 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587943 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.587992 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z7s9j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.591930 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.595864 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.596120 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.596318 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.603950 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.604208 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.604639 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtldg"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.605116 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.605172 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.622452 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.622642 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.622898 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.623130 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.623571 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.624333 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.624903 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.624967 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625126 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625325 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625479 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625507 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-47sww"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.625964 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.626305 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.626569 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.627180 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.627749 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633075 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633449 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633664 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633750 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsbnt\" (UniqueName: \"kubernetes.io/projected/218f0398-9175-448b-83b8-6445e2c3df37-kube-api-access-dsbnt\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633804 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c7a47a-7861-4e43-b3f8-a187fc65f041-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.633959 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634082 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634236 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634086 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634239 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634089 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634578 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634712 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634821 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: 
\"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634907 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634954 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptt6\" (UniqueName: \"kubernetes.io/projected/a5c7a47a-7861-4e43-b3f8-a187fc65f041-kube-api-access-gptt6\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.634992 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608dec52-033b-4c24-9fbf-8fefe81621a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635026 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635060 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689169b7-2cad-4763-9b8d-fdb50126ec69-metrics-tls\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635088 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5t8z\" (UniqueName: \"kubernetes.io/projected/608dec52-033b-4c24-9fbf-8fefe81621a9-kube-api-access-w5t8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635179 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-config\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635213 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5sdnz\" 
(UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635251 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-image-import-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635306 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635345 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63805acf-f9ac-4417-824f-6640f8836b3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635385 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fe4b95-41f9-432d-b597-3941f219b7af-serving-cert\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635418 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-trusted-ca\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635452 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-images\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635526 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shl7r\" (UniqueName: \"kubernetes.io/projected/689169b7-2cad-4763-9b8d-fdb50126ec69-kube-api-access-shl7r\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635563 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-serving-cert\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635596 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24k4\" (UniqueName: \"kubernetes.io/projected/01fe4b95-41f9-432d-b597-3941f219b7af-kube-api-access-r24k4\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635627 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-service-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635663 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635720 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-node-pullsecrets\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a5c7a47a-7861-4e43-b3f8-a187fc65f041-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635801 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-encryption-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635837 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-config\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635885 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218f0398-9175-448b-83b8-6445e2c3df37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635917 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vzb\" (UniqueName: \"kubernetes.io/projected/3cb637fe-7a94-4790-abf9-3beb38ecb8da-kube-api-access-x9vzb\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635948 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vtt7\" (UniqueName: \"kubernetes.io/projected/63805acf-f9ac-4417-824f-6640f8836b3a-kube-api-access-8vtt7\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.635982 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636014 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtbf\" (UniqueName: \"kubernetes.io/projected/5d031ce5-81d8-4a93-8ef6-a97a86e06195-kube-api-access-vmtbf\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636048 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-config\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636082 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit-dir\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636113 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63805acf-f9ac-4417-824f-6640f8836b3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636152 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608dec52-033b-4c24-9fbf-8fefe81621a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636207 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d031ce5-81d8-4a93-8ef6-a97a86e06195-serving-cert\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636257 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"controller-manager-879f6c89f-5sdnz\" 
(UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636590 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636824 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.636997 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.638368 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.638915 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.639322 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.639674 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.640595 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.640721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641395 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641581 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641772 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.641991 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.644401 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.644686 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.644921 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645518 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645553 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645532 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.645776 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.646011 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.646567 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.646949 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.648766 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649298 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649397 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649557 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649625 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649665 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649609 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649802 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.649888 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.651419 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.652182 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.653447 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.653992 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.654127 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.654269 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.656372 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.657948 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.658911 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.658929 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.670322 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.672522 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.672921 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.674235 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.677610 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.678376 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.678842 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.679361 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.679817 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.685880 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.686491 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.687029 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.687694 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689139 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689563 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.691353 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.691522 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-v2prt"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.689592 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.692152 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jc8ph"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.692231 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.694797 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.694856 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.698700 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-b5gpb"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.698998 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jc8ph"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700147 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700554 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700747 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700831 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700948 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700974 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.700987 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701096 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701137 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701228 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.701299 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.702885 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.703751 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.704363 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.704825 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.708736 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.708548 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b5gpb"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.708600 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.709598 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710177 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710695 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zl47s"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710811 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.710934 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.711310 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.711613 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.711641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.712120 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.712709 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.713094 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.713681 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714081 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714269 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714467 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714628 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.714776 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.715156 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b8xqj"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.716295 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.716940 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.717519 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.718448 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.720127 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.720160 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.724711 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.724762 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtldg"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.724772 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.725421 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.732747 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.732799 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-k9xrn"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.733514 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.736371 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-tnwfs"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.738370 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.737315 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.740310 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-47sww"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.740734 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.741982 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742018 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/19fa971c-228f-4457-81be-b2d9220ce27f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742041 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7l5h\" (UniqueName: \"kubernetes.io/projected/19fa971c-228f-4457-81be-b2d9220ce27f-kube-api-access-d7l5h\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742079 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742098 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsbnt\" (UniqueName: \"kubernetes.io/projected/218f0398-9175-448b-83b8-6445e2c3df37-kube-api-access-dsbnt\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742117 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/477b0c18-df7c-46c8-bae3-d0dda1af580c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742147 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-config\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.742180 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.750375 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"]
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751076 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d267eea-0fb6-4471-89b8-0de23f0a5873-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751159 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gptt6\" (UniqueName: \"kubernetes.io/projected/a5c7a47a-7861-4e43-b3f8-a187fc65f041-kube-api-access-gptt6\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751183 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-cabundle\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751206 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751226 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751251 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689169b7-2cad-4763-9b8d-fdb50126ec69-metrics-tls\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751314 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9e5765-1adb-417b-abbc-82c398a424a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751334 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751352 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751372 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d267eea-0fb6-4471-89b8-0de23f0a5873-proxy-tls\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751390 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh46g\" (UniqueName: \"kubernetes.io/projected/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-kube-api-access-hh46g\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751418 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2849d59-5121-45c3-bf3c-41c83a87827c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751437 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnt9\" (UniqueName: \"kubernetes.io/projected/8fb88289-55c4-4710-a8a2-293d430152db-kube-api-access-hpnt9\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751458 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751487 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751510 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fe4b95-41f9-432d-b597-3941f219b7af-serving-cert\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751516 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751532 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-profile-collector-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751551 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b635e15-1e86-4142-8e1d-c26628aa2403-serving-cert\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751572 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751591 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751613 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwnzn\" (UniqueName: \"kubernetes.io/projected/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-kube-api-access-lwnzn\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751631 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751653 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knlj\" (UniqueName: \"kubernetes.io/projected/53f7d13c-e0e5-47cd-b819-8ad8e6e1e761-kube-api-access-4knlj\") pod \"downloads-7954f5f757-jc8ph\" (UID: \"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761\") " pod="openshift-console/downloads-7954f5f757-jc8ph"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751675 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-serving-cert\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751695 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751712 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"
Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751733 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04dd150e-af11-495b-a44b-10cce42da55b-service-ca-bundle\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751763 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-service-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751785 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751807 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751842 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbch\" (UniqueName: \"kubernetes.io/projected/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-kube-api-access-hrbch\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751865 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751864 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751883 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-node-pullsecrets\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751945 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.751985 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/477b0c18-df7c-46c8-bae3-d0dda1af580c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752002 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752024 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c7a47a-7861-4e43-b3f8-a187fc65f041-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752517 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-encryption-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752572 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752599 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752625 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752650 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752734 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218f0398-9175-448b-83b8-6445e2c3df37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752801 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vzb\" (UniqueName: \"kubernetes.io/projected/3cb637fe-7a94-4790-abf9-3beb38ecb8da-kube-api-access-x9vzb\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752820 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s66h\" (UniqueName: \"kubernetes.io/projected/bc2c9228-6181-419f-acdb-869007ac6f6c-kube-api-access-2s66h\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752845 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752866 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtbf\" (UniqueName: \"kubernetes.io/projected/5d031ce5-81d8-4a93-8ef6-a97a86e06195-kube-api-access-vmtbf\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.752890 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753003 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753032 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63805acf-f9ac-4417-824f-6640f8836b3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753053 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753077 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608dec52-033b-4c24-9fbf-8fefe81621a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753094 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d031ce5-81d8-4a93-8ef6-a97a86e06195-serving-cert\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753113 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-metrics-certs\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753101 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753136 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkfd\" (UniqueName: \"kubernetes.io/projected/74c9e5fc-e679-408d-ab8e-aab60ca942e9-kube-api-access-cdkfd\") pod \"migrator-59844c95c7-mbkzc\" (UID: \"74c9e5fc-e679-408d-ab8e-aab60ca942e9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753249 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbsm\" (UniqueName: \"kubernetes.io/projected/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-kube-api-access-tgbsm\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753294 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-default-certificate\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.753371 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753395 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9cq\" (UniqueName: \"kubernetes.io/projected/04dd150e-af11-495b-a44b-10cce42da55b-kube-api-access-np9cq\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753463 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc71eba-e354-4963-967a-7e1c908467b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753524 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c7a47a-7861-4e43-b3f8-a187fc65f041-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753596 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjx9\" (UniqueName: 
\"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753648 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753669 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-dir\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fa971c-228f-4457-81be-b2d9220ce27f-serving-cert\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.753772 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.755052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-service-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.755147 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-node-pullsecrets\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.755501 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5c7a47a-7861-4e43-b3f8-a187fc65f041-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.758012 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63805acf-f9ac-4417-824f-6640f8836b3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.758930 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759063 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-encryption-config\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.758938 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759199 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759832 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759912 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ksbx\" (UniqueName: \"kubernetes.io/projected/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-kube-api-access-7ksbx\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759947 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.759989 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608dec52-033b-4c24-9fbf-8fefe81621a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760073 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760099 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnft\" (UniqueName: \"kubernetes.io/projected/41fed1a2-7c34-4363-bad0-ac0740961cad-kube-api-access-vnnft\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" 
(UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760221 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-serving-cert\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760308 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760474 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608dec52-033b-4c24-9fbf-8fefe81621a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760504 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760554 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5t8z\" (UniqueName: \"kubernetes.io/projected/608dec52-033b-4c24-9fbf-8fefe81621a9-kube-api-access-w5t8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760671 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3294dd98-dfda-4f40-bdd8-ad0b8932432d-machine-approver-tls\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760720 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/689169b7-2cad-4763-9b8d-fdb50126ec69-metrics-tls\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760750 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-stats-auth\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 
10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760778 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-config\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.760990 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761072 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-image-import-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761130 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc71eba-e354-4963-967a-7e1c908467b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: 
\"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761185 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761237 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761601 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63805acf-f9ac-4417-824f-6640f8836b3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761661 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssc7c\" (UniqueName: \"kubernetes.io/projected/3294dd98-dfda-4f40-bdd8-ad0b8932432d-kube-api-access-ssc7c\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761678 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-config\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.761942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762096 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762165 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-images\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762230 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91c03f30-b334-480b-937d-15b6d0b493a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762351 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-trusted-ca\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762465 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-images\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762535 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-image-import-ca\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762696 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762823 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: 
\"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.762916 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-config\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763092 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shl7r\" (UniqueName: \"kubernetes.io/projected/689169b7-2cad-4763-9b8d-fdb50126ec69-kube-api-access-shl7r\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763191 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763201 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763400 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9e5765-1adb-417b-abbc-82c398a424a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: 
\"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763609 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763704 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608dec52-033b-4c24-9fbf-8fefe81621a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763928 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24k4\" (UniqueName: \"kubernetes.io/projected/01fe4b95-41f9-432d-b597-3941f219b7af-kube-api-access-r24k4\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.763983 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01fe4b95-41f9-432d-b597-3941f219b7af-serving-cert\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764168 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/218f0398-9175-448b-83b8-6445e2c3df37-images\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764248 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764050 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56gn\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-kube-api-access-d56gn\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764326 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-tmpfs\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764361 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764398 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e5765-1adb-417b-abbc-82c398a424a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764417 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764455 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-serving-cert\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764516 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-service-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") 
" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764561 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49ll7\" (UniqueName: \"kubernetes.io/projected/7b635e15-1e86-4142-8e1d-c26628aa2403-kube-api-access-49ll7\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764579 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvl9r\" (UniqueName: \"kubernetes.io/projected/91c03f30-b334-480b-937d-15b6d0b493a7-kube-api-access-nvl9r\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764613 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764792 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: 
I0130 10:13:55.764817 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4h2p\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-kube-api-access-p4h2p\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764833 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764854 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764895 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-config\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764932 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b635e15-1e86-4142-8e1d-c26628aa2403-config\") pod 
\"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764949 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfrd\" (UniqueName: \"kubernetes.io/projected/1d267eea-0fb6-4471-89b8-0de23f0a5873-kube-api-access-9mfrd\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.764970 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vtt7\" (UniqueName: \"kubernetes.io/projected/63805acf-f9ac-4417-824f-6640f8836b3a-kube-api-access-8vtt7\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765058 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhdv\" (UniqueName: \"kubernetes.io/projected/a2849d59-5121-45c3-bf3c-41c83a87827c-kube-api-access-gnhdv\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765185 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-srv-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 
10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765350 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-config\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb637fe-7a94-4790-abf9-3beb38ecb8da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765399 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit-dir\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765462 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765485 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-client\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765623 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d031ce5-81d8-4a93-8ef6-a97a86e06195-serving-cert\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765622 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d031ce5-81d8-4a93-8ef6-a97a86e06195-config\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765660 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cb637fe-7a94-4790-abf9-3beb38ecb8da-audit-dir\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765717 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-auth-proxy-config\") pod 
\"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.765735 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.766029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-config\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.766489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c7a47a-7861-4e43-b3f8-a187fc65f041-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.766890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01fe4b95-41f9-432d-b597-3941f219b7af-trusted-ca\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.767032 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzff9"] Jan 30 
10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.770094 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63805acf-f9ac-4417-824f-6640f8836b3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.770599 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.770991 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218f0398-9175-448b-83b8-6445e2c3df37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.774195 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.775708 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.777167 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.777224 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z7s9j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.778160 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-tnwfs"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.779187 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.780276 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zl47s"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.782137 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.783575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jc8ph"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.784228 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.786136 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.789795 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.792518 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.793544 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.794682 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.795401 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.796802 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.798589 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.800167 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.801053 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.802451 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tm6bv"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.803204 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tm6bv" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.803534 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.804529 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h9smt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.805529 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.806162 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.806754 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.808106 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tm6bv"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.812846 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h9smt"] Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.826557 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.835685 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.855518 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866426 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866470 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866496 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866518 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9e5765-1adb-417b-abbc-82c398a424a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866541 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866568 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866589 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d267eea-0fb6-4471-89b8-0de23f0a5873-proxy-tls\") pod 
\"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866615 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh46g\" (UniqueName: \"kubernetes.io/projected/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-kube-api-access-hh46g\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2849d59-5121-45c3-bf3c-41c83a87827c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866692 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnt9\" (UniqueName: \"kubernetes.io/projected/8fb88289-55c4-4710-a8a2-293d430152db-kube-api-access-hpnt9\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.866744 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866768 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866792 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866819 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwnzn\" (UniqueName: \"kubernetes.io/projected/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-kube-api-access-lwnzn\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866846 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-profile-collector-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866870 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b635e15-1e86-4142-8e1d-c26628aa2403-serving-cert\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866894 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866920 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knlj\" (UniqueName: \"kubernetes.io/projected/53f7d13c-e0e5-47cd-b819-8ad8e6e1e761-kube-api-access-4knlj\") pod \"downloads-7954f5f757-jc8ph\" (UID: \"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761\") " pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866968 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.866993 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04dd150e-af11-495b-a44b-10cce42da55b-service-ca-bundle\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867018 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867069 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867106 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbch\" (UniqueName: \"kubernetes.io/projected/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-kube-api-access-hrbch\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.867131 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867180 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/477b0c18-df7c-46c8-bae3-d0dda1af580c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867263 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867305 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867329 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867374 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867436 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867448 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f9e5765-1adb-417b-abbc-82c398a424a2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.867491 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868439 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868480 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868488 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868513 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s66h\" (UniqueName: \"kubernetes.io/projected/bc2c9228-6181-419f-acdb-869007ac6f6c-kube-api-access-2s66h\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868559 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868596 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: 
\"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868645 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-metrics-certs\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868669 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdkfd\" (UniqueName: \"kubernetes.io/projected/74c9e5fc-e679-408d-ab8e-aab60ca942e9-kube-api-access-cdkfd\") pod \"migrator-59844c95c7-mbkzc\" (UID: \"74c9e5fc-e679-408d-ab8e-aab60ca942e9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868723 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbsm\" (UniqueName: 
\"kubernetes.io/projected/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-kube-api-access-tgbsm\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868748 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-default-certificate\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868808 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868835 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np9cq\" (UniqueName: \"kubernetes.io/projected/04dd150e-af11-495b-a44b-10cce42da55b-kube-api-access-np9cq\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868858 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc71eba-e354-4963-967a-7e1c908467b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868881 
4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868907 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-dir\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868942 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fa971c-228f-4457-81be-b2d9220ce27f-serving-cert\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868965 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.868989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 
30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869016 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ksbx\" (UniqueName: \"kubernetes.io/projected/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-kube-api-access-7ksbx\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869038 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869063 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869076 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-dir\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869085 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " 
pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869109 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-serving-cert\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869140 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnft\" (UniqueName: \"kubernetes.io/projected/41fed1a2-7c34-4363-bad0-ac0740961cad-kube-api-access-vnnft\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869167 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869190 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869225 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/3294dd98-dfda-4f40-bdd8-ad0b8932432d-machine-approver-tls\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869256 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-stats-auth\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869301 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc71eba-e354-4963-967a-7e1c908467b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869338 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssc7c\" (UniqueName: \"kubernetes.io/projected/3294dd98-dfda-4f40-bdd8-ad0b8932432d-kube-api-access-ssc7c\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/477b0c18-df7c-46c8-bae3-d0dda1af580c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869362 
4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.869390 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870209 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-images\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91c03f30-b334-480b-937d-15b6d0b493a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870256 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-config\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") 
" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870303 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9e5765-1adb-417b-abbc-82c398a424a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870329 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870364 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56gn\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-kube-api-access-d56gn\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-tmpfs\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.870014 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-trusted-ca\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871076 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-tmpfs\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871095 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/91c03f30-b334-480b-937d-15b6d0b493a7-images\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871162 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871192 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871218 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-service-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871231 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871251 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e5765-1adb-417b-abbc-82c398a424a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871249 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b635e15-1e86-4142-8e1d-c26628aa2403-serving-cert\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871303 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49ll7\" (UniqueName: \"kubernetes.io/projected/7b635e15-1e86-4142-8e1d-c26628aa2403-kube-api-access-49ll7\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871359 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvl9r\" (UniqueName: \"kubernetes.io/projected/91c03f30-b334-480b-937d-15b6d0b493a7-kube-api-access-nvl9r\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871408 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871432 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4h2p\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-kube-api-access-p4h2p\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871451 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871409 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871470 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871487 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfrd\" (UniqueName: \"kubernetes.io/projected/1d267eea-0fb6-4471-89b8-0de23f0a5873-kube-api-access-9mfrd\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871508 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b635e15-1e86-4142-8e1d-c26628aa2403-config\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871532 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhdv\" (UniqueName: \"kubernetes.io/projected/a2849d59-5121-45c3-bf3c-41c83a87827c-kube-api-access-gnhdv\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871548 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-srv-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871566 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871570 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a2849d59-5121-45c3-bf3c-41c83a87827c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871584 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-client\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871602 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871620 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-auth-proxy-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871664 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/19fa971c-228f-4457-81be-b2d9220ce27f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871682 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.871701 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7l5h\" (UniqueName: \"kubernetes.io/projected/19fa971c-228f-4457-81be-b2d9220ce27f-kube-api-access-d7l5h\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871735 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871755 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-config\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871782 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/477b0c18-df7c-46c8-bae3-d0dda1af580c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871825 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d267eea-0fb6-4471-89b8-0de23f0a5873-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871897 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.871921 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-cabundle\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872012 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-service-ca\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872201 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b635e15-1e86-4142-8e1d-c26628aa2403-config\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872787 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc2c9228-6181-419f-acdb-869007ac6f6c-config\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872828 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-serving-cert\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.872892 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.873379 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3294dd98-dfda-4f40-bdd8-ad0b8932432d-machine-approver-tls\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.873667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-profile-collector-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.873862 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/19fa971c-228f-4457-81be-b2d9220ce27f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874049 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f9e5765-1adb-417b-abbc-82c398a424a2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874058 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874155 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3294dd98-dfda-4f40-bdd8-ad0b8932432d-auth-proxy-config\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.874959 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d267eea-0fb6-4471-89b8-0de23f0a5873-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.875663 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bc2c9228-6181-419f-acdb-869007ac6f6c-etcd-client\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.876043 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 10:13:55 crc 
kubenswrapper[4984]: I0130 10:13:55.876090 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-metrics-tls\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.876481 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/477b0c18-df7c-46c8-bae3-d0dda1af580c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.876898 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.877620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8fb88289-55c4-4710-a8a2-293d430152db-srv-cert\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.879540 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/91c03f30-b334-480b-937d-15b6d0b493a7-proxy-tls\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.896056 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.907039 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.916121 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.932022 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.935442 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.944806 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 
10:13:55.961177 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.965053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.975772 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.980857 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:55 crc kubenswrapper[4984]: I0130 10:13:55.995891 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.005394 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.015828 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.017185 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.036298 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.038111 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.063669 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.073357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.075596 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.095399 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.102531 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fa971c-228f-4457-81be-b2d9220ce27f-serving-cert\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.115230 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.136106 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.156375 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.160820 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.175660 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.195220 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.225349 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.229506 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.236734 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.243809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.255628 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.261884 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.275186 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.281269 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.295670 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.303321 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.315746 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.336310 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.376007 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.396015 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.402323 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-config\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.420362 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.436276 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.441312 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.456020 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.464635 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-default-certificate\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.476475 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.496097 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.504703 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-stats-auth\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.516430 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.523462 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/04dd150e-af11-495b-a44b-10cce42da55b-metrics-certs\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.536114 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.538935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04dd150e-af11-495b-a44b-10cce42da55b-service-ca-bundle\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.556299 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.576241 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.596659 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.604475 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.616901 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.635934 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.641715 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.655870 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.657756 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.676147 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.695615 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.714303 4984 request.go:700] Waited for 1.002682546s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.716370 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.736144 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.740292 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d267eea-0fb6-4471-89b8-0de23f0a5873-proxy-tls\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.757662 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.761561 4984 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.761663 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client podName:3cb637fe-7a94-4790-abf9-3beb38ecb8da nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.261635432 +0000 UTC m=+141.827939266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client") pod "apiserver-76f77b778f-fzff9" (UID: "3cb637fe-7a94-4790-abf9-3beb38ecb8da") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.763659 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-cabundle\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.775767 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.782645 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-webhook-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.783159 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.795259 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.816428 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.836154 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.856157 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.863618 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdc71eba-e354-4963-967a-7e1c908467b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867592 4984 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867734 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs podName:41fed1a2-7c34-4363-bad0-ac0740961cad nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.367716562 +0000 UTC m=+141.934020386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs") pod "multus-admission-controller-857f4d67dd-j6cv2" (UID: "41fed1a2-7c34-4363-bad0-ac0740961cad") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867597 4984 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867942 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert podName:50a9dda1-acf5-471f-a6cd-46e77a1dfa24 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.367929719 +0000 UTC m=+141.934233543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert") pod "catalog-operator-68c6474976-6kcss" (UID: "50a9dda1-acf5-471f-a6cd-46e77a1dfa24") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867632 4984 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868146 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume podName:fbdde9dd-69cf-405d-9143-1739e3acbdde nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368135456 +0000 UTC m=+141.934439280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume") pod "collect-profiles-29496120-p5sk8" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868208 4984 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868239 4984 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868321 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368301572 +0000 UTC m=+141.934605386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868358 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368335663 +0000 UTC m=+141.934639527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.867661 4984 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868207 4984 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868421 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368406935 +0000 UTC m=+141.934710779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868439 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368431496 +0000 UTC m=+141.934735420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868803 4984 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868928 4984 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868936 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca podName:b92a67bb-8407-4e47-9d9a-9d15398d90ed nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368901112 +0000 UTC m=+141.935204956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca") pod "marketplace-operator-79b997595-9lf7j" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.868959 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics podName:b92a67bb-8407-4e47-9d9a-9d15398d90ed nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.368952714 +0000 UTC m=+141.935256538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics") pod "marketplace-operator-79b997595-9lf7j" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.870003 4984 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.870085 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.370064472 +0000 UTC m=+141.936368336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871657 4984 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871698 4984 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871720 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key podName:a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.371700677 +0000 UTC m=+141.938004531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key") pod "service-ca-9c57cc56f-zl47s" (UID: "a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.871768 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config podName:fdc71eba-e354-4963-967a-7e1c908467b5 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.371753989 +0000 UTC m=+141.938057813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config") pod "kube-apiserver-operator-766d6c64bb-vqg9w" (UID: "fdc71eba-e354-4963-967a-7e1c908467b5") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872590 4984 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872713 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls podName:3c2dcd5a-96f0-48ff-a004-9764d24b66b1 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.372700461 +0000 UTC m=+141.939004345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-g5m7t" (UID: "3c2dcd5a-96f0-48ff-a004-9764d24b66b1") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872602 4984 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: E0130 10:13:56.872905 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle podName:549b3b6c-e68d-4da4-8780-643fdbf7e4c9 nodeName:}" failed. No retries permitted until 2026-01-30 10:13:57.372894318 +0000 UTC m=+141.939198212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle") pod "apiserver-7bbb656c7d-6hfpr" (UID: "549b3b6c-e68d-4da4-8780-643fdbf7e4c9") : failed to sync configmap cache: timed out waiting for the condition
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.875111 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.895174 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.916189 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.935520 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.955200 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.974882 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 10:13:56 crc kubenswrapper[4984]: I0130 10:13:56.995350 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.015870 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.035234 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.061718 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.075994 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.095627 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.116350 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.136541 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.156488 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.175518 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.196900 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.215917 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.236858 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.255869 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.277389 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.295792 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.301005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.316178 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.336367 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.357782 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.375844 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.402768 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403076 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403267 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403542 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.403988 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404150 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404408 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404618 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"
Jan 30 10:13:57 crc kubenswrapper[4984]: I0130
10:13:57.404797 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdc71eba-e354-4963-967a-7e1c908467b5-config\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.404989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405110 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405169 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405965 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-audit-policies\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 
10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.405230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406210 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406306 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406406 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.406840 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.407035 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.407180 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.413961 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.414647 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-encryption-config\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.414749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-etcd-client\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: 
\"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.415505 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-srv-cert\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417119 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417586 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417769 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41fed1a2-7c34-4363-bad0-ac0740961cad-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.417931 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-signing-key\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:57 crc 
kubenswrapper[4984]: I0130 10:13:57.420523 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-serving-cert\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.436482 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.455572 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.475791 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.495413 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.538515 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsbnt\" (UniqueName: \"kubernetes.io/projected/218f0398-9175-448b-83b8-6445e2c3df37-kube-api-access-dsbnt\") pod \"machine-api-operator-5694c8668f-b9k4d\" (UID: \"218f0398-9175-448b-83b8-6445e2c3df37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.554142 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gptt6\" (UniqueName: \"kubernetes.io/projected/a5c7a47a-7861-4e43-b3f8-a187fc65f041-kube-api-access-gptt6\") pod \"openshift-controller-manager-operator-756b6f6bc6-cw4qh\" (UID: \"a5c7a47a-7861-4e43-b3f8-a187fc65f041\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.579598 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vzb\" (UniqueName: \"kubernetes.io/projected/3cb637fe-7a94-4790-abf9-3beb38ecb8da-kube-api-access-x9vzb\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.599962 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmtbf\" (UniqueName: \"kubernetes.io/projected/5d031ce5-81d8-4a93-8ef6-a97a86e06195-kube-api-access-vmtbf\") pod \"authentication-operator-69f744f599-b8xqj\" (UID: \"5d031ce5-81d8-4a93-8ef6-a97a86e06195\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.613670 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5t8z\" (UniqueName: \"kubernetes.io/projected/608dec52-033b-4c24-9fbf-8fefe81621a9-kube-api-access-w5t8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-jj8f2\" (UID: \"608dec52-033b-4c24-9fbf-8fefe81621a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.630853 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"controller-manager-879f6c89f-5sdnz\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.633244 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.635477 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.678461 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shl7r\" (UniqueName: \"kubernetes.io/projected/689169b7-2cad-4763-9b8d-fdb50126ec69-kube-api-access-shl7r\") pod \"dns-operator-744455d44c-mtldg\" (UID: \"689169b7-2cad-4763-9b8d-fdb50126ec69\") " pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.698467 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vtt7\" (UniqueName: \"kubernetes.io/projected/63805acf-f9ac-4417-824f-6640f8836b3a-kube-api-access-8vtt7\") pod \"openshift-apiserver-operator-796bbdcf4f-p5hhc\" (UID: \"63805acf-f9ac-4417-824f-6640f8836b3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.705227 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.715574 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.717674 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24k4\" (UniqueName: \"kubernetes.io/projected/01fe4b95-41f9-432d-b597-3941f219b7af-kube-api-access-r24k4\") pod \"console-operator-58897d9998-z7s9j\" (UID: \"01fe4b95-41f9-432d-b597-3941f219b7af\") " pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.733849 4984 request.go:700] Waited for 1.930258294s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.735899 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.755965 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.758292 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.775774 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.780570 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.794652 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.795989 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.816051 4984 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.820544 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.831960 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2"] Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.838756 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.854626 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:13:57 crc kubenswrapper[4984]: W0130 10:13:57.878808 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608dec52_033b_4c24_9fbf_8fefe81621a9.slice/crio-bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570 WatchSource:0}: Error finding container bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570: Status 404 returned error can't find the container with id bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570 Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.880154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh46g\" (UniqueName: \"kubernetes.io/projected/3c2dcd5a-96f0-48ff-a004-9764d24b66b1-kube-api-access-hh46g\") pod \"control-plane-machine-set-operator-78cbb6b69f-g5m7t\" (UID: \"3c2dcd5a-96f0-48ff-a004-9764d24b66b1\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.896260 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.897096 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.901778 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"route-controller-manager-6576b87f9c-v6xww\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.920539 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnt9\" (UniqueName: \"kubernetes.io/projected/8fb88289-55c4-4710-a8a2-293d430152db-kube-api-access-hpnt9\") pod \"olm-operator-6b444d44fb-k55nt\" (UID: \"8fb88289-55c4-4710-a8a2-293d430152db\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.931756 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knlj\" (UniqueName: \"kubernetes.io/projected/53f7d13c-e0e5-47cd-b819-8ad8e6e1e761-kube-api-access-4knlj\") pod \"downloads-7954f5f757-jc8ph\" (UID: \"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761\") " pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.939550 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.949716 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b9k4d"] Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.959148 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbch\" (UniqueName: \"kubernetes.io/projected/ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0-kube-api-access-hrbch\") pod \"packageserver-d55dfcdfc-wc7jt\" (UID: \"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.971105 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" Jan 30 10:13:57 crc kubenswrapper[4984]: I0130 10:13:57.989998 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwnzn\" (UniqueName: \"kubernetes.io/projected/549b3b6c-e68d-4da4-8780-643fdbf7e4c9-kube-api-access-lwnzn\") pod \"apiserver-7bbb656c7d-6hfpr\" (UID: \"549b3b6c-e68d-4da4-8780-643fdbf7e4c9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.013286 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"collect-profiles-29496120-p5sk8\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 
10:13:58.034541 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s66h\" (UniqueName: \"kubernetes.io/projected/bc2c9228-6181-419f-acdb-869007ac6f6c-kube-api-access-2s66h\") pod \"etcd-operator-b45778765-47sww\" (UID: \"bc2c9228-6181-419f-acdb-869007ac6f6c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.048212 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.057624 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdkfd\" (UniqueName: \"kubernetes.io/projected/74c9e5fc-e679-408d-ab8e-aab60ca942e9-kube-api-access-cdkfd\") pod \"migrator-59844c95c7-mbkzc\" (UID: \"74c9e5fc-e679-408d-ab8e-aab60ca942e9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.070126 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.079344 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbsm\" (UniqueName: \"kubernetes.io/projected/50a9dda1-acf5-471f-a6cd-46e77a1dfa24-kube-api-access-tgbsm\") pod \"catalog-operator-68c6474976-6kcss\" (UID: \"50a9dda1-acf5-471f-a6cd-46e77a1dfa24\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.091991 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np9cq\" (UniqueName: \"kubernetes.io/projected/04dd150e-af11-495b-a44b-10cce42da55b-kube-api-access-np9cq\") pod \"router-default-5444994796-b5gpb\" (UID: \"04dd150e-af11-495b-a44b-10cce42da55b\") " pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.101797 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.114217 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdc71eba-e354-4963-967a-7e1c908467b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vqg9w\" (UID: \"fdc71eba-e354-4963-967a-7e1c908467b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.117784 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.127669 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.133422 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"oauth-openshift-558db77b4-59vj6\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.143607 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.153537 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.156365 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.181196 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mtldg"] Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.181418 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ksbx\" (UniqueName: \"kubernetes.io/projected/a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10-kube-api-access-7ksbx\") pod \"service-ca-9c57cc56f-zl47s\" (UID: \"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10\") " pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" Jan 30 10:13:58 crc 
kubenswrapper[4984]: I0130 10:13:58.186684 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.193780 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnnft\" (UniqueName: \"kubernetes.io/projected/41fed1a2-7c34-4363-bad0-ac0740961cad-kube-api-access-vnnft\") pod \"multus-admission-controller-857f4d67dd-j6cv2\" (UID: \"41fed1a2-7c34-4363-bad0-ac0740961cad\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.212495 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"marketplace-operator-79b997595-9lf7j\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.215442 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww"
Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.215512 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb88289_55c4_4710_a8a2_293d430152db.slice/crio-46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c WatchSource:0}: Error finding container 46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c: Status 404 returned error can't find the container with id 46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.233396 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssc7c\" (UniqueName: \"kubernetes.io/projected/3294dd98-dfda-4f40-bdd8-ad0b8932432d-kube-api-access-ssc7c\") pod \"machine-approver-56656f9798-kgt7t\" (UID: \"3294dd98-dfda-4f40-bdd8-ad0b8932432d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.243703 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.248513 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.255867 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f9e5765-1adb-417b-abbc-82c398a424a2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sz5wt\" (UID: \"1f9e5765-1adb-417b-abbc-82c398a424a2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"
Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.258464 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63805acf_f9ac_4417_824f_6640f8836b3a.slice/crio-61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049 WatchSource:0}: Error finding container 61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049: Status 404 returned error can't find the container with id 61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.266224 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.272069 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56gn\" (UniqueName: \"kubernetes.io/projected/535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0-kube-api-access-d56gn\") pod \"ingress-operator-5b745b69d9-mgwkp\" (UID: \"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.276332 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jc8ph"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.300041 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.301602 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"console-f9d7485db-v2prt\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.302236 4984 secret.go:188] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.302304 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client podName:3cb637fe-7a94-4790-abf9-3beb38ecb8da nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.302284736 +0000 UTC m=+143.868588550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client") pod "apiserver-76f77b778f-fzff9" (UID: "3cb637fe-7a94-4790-abf9-3beb38ecb8da") : failed to sync secret cache: timed out waiting for the condition
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.331605 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.334496 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z7s9j"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.336177 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.336908 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49ll7\" (UniqueName: \"kubernetes.io/projected/7b635e15-1e86-4142-8e1d-c26628aa2403-kube-api-access-49ll7\") pod \"service-ca-operator-777779d784-gm2ht\" (UID: \"7b635e15-1e86-4142-8e1d-c26628aa2403\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.341368 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.350078 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b8xqj"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.353538 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/362ed1a8-599d-44c5-bf2d-d9d7d69517e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gb94b\" (UID: \"362ed1a8-599d-44c5-bf2d-d9d7d69517e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"
Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.353857 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f7d13c_e0e5_47cd_b819_8ad8e6e1e761.slice/crio-848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af WatchSource:0}: Error finding container 848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af: Status 404 returned error can't find the container with id 848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.355880 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvl9r\" (UniqueName: \"kubernetes.io/projected/91c03f30-b334-480b-937d-15b6d0b493a7-kube-api-access-nvl9r\") pod \"machine-config-operator-74547568cd-kmfcq\" (UID: \"91c03f30-b334-480b-937d-15b6d0b493a7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.356841 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.363292 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-b5gpb"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.372715 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.378532 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfrd\" (UniqueName: \"kubernetes.io/projected/1d267eea-0fb6-4471-89b8-0de23f0a5873-kube-api-access-9mfrd\") pod \"machine-config-controller-84d6567774-kc9w4\" (UID: \"1d267eea-0fb6-4471-89b8-0de23f0a5873\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.378808 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.388556 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.392709 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.399249 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.403442 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4h2p\" (UniqueName: \"kubernetes.io/projected/477b0c18-df7c-46c8-bae3-d0dda1af580c-kube-api-access-p4h2p\") pod \"cluster-image-registry-operator-dc59b4c8b-fczwn\" (UID: \"477b0c18-df7c-46c8-bae3-d0dda1af580c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.408639 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.422618 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhdv\" (UniqueName: \"kubernetes.io/projected/a2849d59-5121-45c3-bf3c-41c83a87827c-kube-api-access-gnhdv\") pod \"cluster-samples-operator-665b6dd947-t8vjw\" (UID: \"a2849d59-5121-45c3-bf3c-41c83a87827c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.434624 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.441324 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7l5h\" (UniqueName: \"kubernetes.io/projected/19fa971c-228f-4457-81be-b2d9220ce27f-kube-api-access-d7l5h\") pod \"openshift-config-operator-7777fb866f-p6b4g\" (UID: \"19fa971c-228f-4457-81be-b2d9220ce27f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"
Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.450524 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d031ce5_81d8_4a93_8ef6_a97a86e06195.slice/crio-c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9 WatchSource:0}: Error finding container c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9: Status 404 returned error can't find the container with id c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.457581 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.460574 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.506108 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.506306 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.508215 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-47sww"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.522115 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.533977 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534044 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534316 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534347 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534408 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fd0694-7375-4f0f-8cf1-84af752803b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534436 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534486 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkk4\" (UniqueName: \"kubernetes.io/projected/b8fd0694-7375-4f0f-8cf1-84af752803b6-kube-api-access-lxkk4\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534521 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534541 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.534582 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.534867 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.034854888 +0000 UTC m=+143.601158762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.552610 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.559654 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.592110 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc"]
Jan 30 10:13:58 crc kubenswrapper[4984]: W0130 10:13:58.601552 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2c9228_6181_419f_acdb_869007ac6f6c.slice/crio-08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063 WatchSource:0}: Error finding container 08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063: Status 404 returned error can't find the container with id 08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.617925 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.639160 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.639811 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640038 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640129 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-mountpoint-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640250 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640345 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292w5\" (UniqueName: \"kubernetes.io/projected/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-kube-api-access-292w5\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640395 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-registration-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640521 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fd0694-7375-4f0f-8cf1-84af752803b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.640547 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-config-volume\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641401 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641501 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641738 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9a01c47-eab2-4990-a659-a1f15a8176dd-cert\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.641875 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-csi-data-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkk4\" (UniqueName: \"kubernetes.io/projected/b8fd0694-7375-4f0f-8cf1-84af752803b6-kube-api-access-lxkk4\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642134 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642174 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.643484 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.143465054 +0000 UTC m=+143.709768958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.642196 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rbf\" (UniqueName: \"kubernetes.io/projected/e9a01c47-eab2-4990-a659-a1f15a8176dd-kube-api-access-c5rbf\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.645227 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.645440 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-certs\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.646118 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8dl\" (UniqueName: \"kubernetes.io/projected/48ae7d4f-38b1-40c0-ad61-815992265930-kube-api-access-ch8dl\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.646349 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-node-bootstrap-token\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.646424 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-metrics-tls\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.648679 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.14866666 +0000 UTC m=+143.714970484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652324 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlt5b\" (UniqueName: \"kubernetes.io/projected/5f7fb8a3-2517-48bf-9a10-82725a7391cb-kube-api-access-xlt5b\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-plugins-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652622 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-socket-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.652912 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.653336 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.657572 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.666454 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.668422 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fd0694-7375-4f0f-8cf1-84af752803b6-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.679808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.698994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.714555 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkk4\" (UniqueName: \"kubernetes.io/projected/b8fd0694-7375-4f0f-8cf1-84af752803b6-kube-api-access-lxkk4\") pod \"package-server-manager-789f6589d5-7n8b9\" (UID: \"b8fd0694-7375-4f0f-8cf1-84af752803b6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.722877 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.724759 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.732596 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.758642 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762255 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-config-volume\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762334 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9a01c47-eab2-4990-a659-a1f15a8176dd-cert\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762393 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-csi-data-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762471 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rbf\" (UniqueName: \"kubernetes.io/projected/e9a01c47-eab2-4990-a659-a1f15a8176dd-kube-api-access-c5rbf\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762634 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-certs\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762680 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8dl\" (UniqueName: \"kubernetes.io/projected/48ae7d4f-38b1-40c0-ad61-815992265930-kube-api-access-ch8dl\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762722 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-node-bootstrap-token\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762758 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-metrics-tls\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.762787 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlt5b\" (UniqueName: \"kubernetes.io/projected/5f7fb8a3-2517-48bf-9a10-82725a7391cb-kube-api-access-xlt5b\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763162 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-plugins-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-socket-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763252 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-mountpoint-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763348 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-registration-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.763379 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292w5\" (UniqueName: \"kubernetes.io/projected/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-kube-api-access-292w5\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785596 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-plugins-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785697 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-socket-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785757 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-mountpoint-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.785843 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-registration-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt" Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.786341 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.286321562 +0000 UTC m=+143.852625386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.788405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-certs\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.790578 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-metrics-tls\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.790811 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/48ae7d4f-38b1-40c0-ad61-815992265930-csi-data-dir\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.791326 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.798017 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-config-volume\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.800299 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e9a01c47-eab2-4990-a659-a1f15a8176dd-cert\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.802383 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8dl\" (UniqueName: \"kubernetes.io/projected/48ae7d4f-38b1-40c0-ad61-815992265930-kube-api-access-ch8dl\") pod \"csi-hostpathplugin-h9smt\" (UID: \"48ae7d4f-38b1-40c0-ad61-815992265930\") " pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.808311 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5f7fb8a3-2517-48bf-9a10-82725a7391cb-node-bootstrap-token\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.811881 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h9smt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.824912 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlt5b\" (UniqueName: \"kubernetes.io/projected/5f7fb8a3-2517-48bf-9a10-82725a7391cb-kube-api-access-xlt5b\") pod \"machine-config-server-k9xrn\" (UID: \"5f7fb8a3-2517-48bf-9a10-82725a7391cb\") " pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.830020 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292w5\" (UniqueName: \"kubernetes.io/projected/87dbb26d-b1c4-4a8f-b2b6-64e39edadd68-kube-api-access-292w5\") pod \"dns-default-tnwfs\" (UID: \"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68\") " pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.850674 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" event={"ID":"8fb88289-55c4-4710-a8a2-293d430152db","Type":"ContainerStarted","Data":"4fca571edf5f9718dc5f0396111c559d14291ec33ffb072b810473ab7ddb77fa"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.850749 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" event={"ID":"8fb88289-55c4-4710-a8a2-293d430152db","Type":"ContainerStarted","Data":"46e0e0ba610aca51e6fd2e8b1229c2cdca45e118274f02854866d335f1d1db5c"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.852486 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.860124 4984 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k55nt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.860208 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" podUID="8fb88289-55c4-4710-a8a2-293d430152db" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.861133 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rbf\" (UniqueName: \"kubernetes.io/projected/e9a01c47-eab2-4990-a659-a1f15a8176dd-kube-api-access-c5rbf\") pod \"ingress-canary-tm6bv\" (UID: \"e9a01c47-eab2-4990-a659-a1f15a8176dd\") " pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.865347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.865727 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.365708846 +0000 UTC m=+143.932012670 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.875104 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" event={"ID":"01fe4b95-41f9-432d-b597-3941f219b7af","Type":"ContainerStarted","Data":"a57a2c30691a6546a05303d30cd231a0294ddbcd0742bd77474e4dcebb493e1a"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.884301 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" event={"ID":"3294dd98-dfda-4f40-bdd8-ad0b8932432d","Type":"ContainerStarted","Data":"41abe91d6377d1be83ade8ce7a07b7a076924bbd33ef3cef1a1710510f6cb9b3"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.890371 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" event={"ID":"bc2c9228-6181-419f-acdb-869007ac6f6c","Type":"ContainerStarted","Data":"08a43da6bfd4c26b0a4b2351aaadeb6018ef827800a32c3c99211656eaed2063"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.892035 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jc8ph" event={"ID":"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761","Type":"ContainerStarted","Data":"848bcfe99f540e233dccc76ed1243ebcafe87c4fee1a88a087b0a905174f90af"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.894666 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jc8ph"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.894700 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.894763 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.899108 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" event={"ID":"a5c7a47a-7861-4e43-b3f8-a187fc65f041","Type":"ContainerStarted","Data":"3bc8f0953d72545b60497ae94ad96ff16d89c4dcdc37a3b078ced73ce53e51bf"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.912232 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerStarted","Data":"5d2a7595aa7be4a2d24c3db3a03ceede193b8f38eb6567b569e38559c698d2a9"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.912527 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.915021 4984 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-v6xww container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.915070 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.919057 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt"]
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.934849 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerStarted","Data":"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.934905 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerStarted","Data":"ea7973a6b7aeb56d77b3657c44c45b40105b1dffec897b668fde3fd406ab2c03"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.936076 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.940221 4984 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5sdnz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.940636 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.948622 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" event={"ID":"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0","Type":"ContainerStarted","Data":"103528758bdcda2daf4733d5f787a5ab02533f96f8b25c345c3381791d91f8b3"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.958241 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" event={"ID":"50a9dda1-acf5-471f-a6cd-46e77a1dfa24","Type":"ContainerStarted","Data":"582d5ceddd62e3e6c034668bfc56e1c1dc891d35188ba5cb001d91b7ec29fc9a"}
Jan 30 10:13:58 crc kubenswrapper[4984]: I0130 10:13:58.966560 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:13:58 crc kubenswrapper[4984]: E0130 10:13:58.967839 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.467820221 +0000 UTC m=+144.034124045 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.006541 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" event={"ID":"5d031ce5-81d8-4a93-8ef6-a97a86e06195","Type":"ContainerStarted","Data":"c5c58fba3dc8a27999cdea93d737255746a9645e9fcc439a62b9a491bf63e7e9"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.009229 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" event={"ID":"74c9e5fc-e679-408d-ab8e-aab60ca942e9","Type":"ContainerStarted","Data":"cce9b5b312fa895c5861cec669487a87ae27962623bf7508ff3af357237da0cb"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.010291 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" event={"ID":"549b3b6c-e68d-4da4-8780-643fdbf7e4c9","Type":"ContainerStarted","Data":"c555d93237ec1b9fa5e2f685eab7146593879790d731765baa1d6201c0c59e26"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.040978 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" event={"ID":"218f0398-9175-448b-83b8-6445e2c3df37","Type":"ContainerStarted","Data":"56bb67d67f6487bc2ce5657ae521b69cbba4c9372c5640fa7fe5c05e84b7b5a3"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.041021 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" event={"ID":"218f0398-9175-448b-83b8-6445e2c3df37","Type":"ContainerStarted","Data":"b1612a29cce8d1a4b851623ccd75f2ac8d02fc765bd405a9e59595d860ac3506"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.041031 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" event={"ID":"218f0398-9175-448b-83b8-6445e2c3df37","Type":"ContainerStarted","Data":"01285bf4262939096b0619a50d238d6faaa4374bfa3ad0d25135d4d6f8d97ac2"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.043068 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" event={"ID":"608dec52-033b-4c24-9fbf-8fefe81621a9","Type":"ContainerStarted","Data":"60d289f9860aea5183462739badf3ac4bc95dad32afa678ddd17fcc98fbf4373"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.043120 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" event={"ID":"608dec52-033b-4c24-9fbf-8fefe81621a9","Type":"ContainerStarted","Data":"bb29d15b5cdf9d1b89a8e452f139f98d4745a5989db31badf550389319f31570"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.044604 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b5gpb" event={"ID":"04dd150e-af11-495b-a44b-10cce42da55b","Type":"ContainerStarted","Data":"acac6926e4a3d3ff4ffbe4df03513665fb9767d834e311a9014488a905873713"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.045823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" event={"ID":"689169b7-2cad-4763-9b8d-fdb50126ec69","Type":"ContainerStarted","Data":"effcc05a29f40eb75b75ff1c10712983508ca82a309d510f8d5ef218e29d87ff"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.045854 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" event={"ID":"689169b7-2cad-4763-9b8d-fdb50126ec69","Type":"ContainerStarted","Data":"43a3851ce8e433b6771be4e20f604dcbd631cb222f1c933e9ca2cfd6d8718185"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.046936 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" event={"ID":"63805acf-f9ac-4417-824f-6640f8836b3a","Type":"ContainerStarted","Data":"2978e9cd3ccd49cfb0d09269903ee35442921ddc50767ac10039e61dcae52d09"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.046962 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" event={"ID":"63805acf-f9ac-4417-824f-6640f8836b3a","Type":"ContainerStarted","Data":"61ac4c7af215aad67dbbb43683d3af25089af34bdac135a1617aa4d06d7af049"}
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.068435 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.072611 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k9xrn"
Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.072869 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.572853966 +0000 UTC m=+144.139157790 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.091949 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tm6bv"
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.092174 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-tnwfs"
Jan 30 10:13:59 crc kubenswrapper[4984]: W0130 10:13:59.112837 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f9e5765_1adb_417b_abbc_82c398a424a2.slice/crio-9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c WatchSource:0}: Error finding container 9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c: Status 404 returned error can't find the container with id 9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.126341 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b"]
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.168066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"]
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.169231 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.169506 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.669430323 +0000 UTC m=+144.235734147 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.169838 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.171742 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.67170268 +0000 UTC m=+144.238006504 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.186980 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.211498 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.221650 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.225932 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.240897 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.270679 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.273909 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zl47s"] Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 
10:13:59.277056 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.774673255 +0000 UTC m=+144.340977079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.371797 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.372078 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.373262 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.873226139 +0000 UTC m=+144.439530023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.384462 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3cb637fe-7a94-4790-abf9-3beb38ecb8da-etcd-client\") pod \"apiserver-76f77b778f-fzff9\" (UID: \"3cb637fe-7a94-4790-abf9-3beb38ecb8da\") " pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.413327 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.472661 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.473116 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.973083198 +0000 UTC m=+144.539387022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.473620 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.474242 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:13:59.974230697 +0000 UTC m=+144.540534521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.527886 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j6cv2"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.540796 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.574345 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.576111 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.076090544 +0000 UTC m=+144.642394368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: W0130 10:13:59.634602 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdc71eba_e354_4963_967a_7e1c908467b5.slice/crio-0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa WatchSource:0}: Error finding container 0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa: Status 404 returned error can't find the container with id 0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.680743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.681668 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.181652166 +0000 UTC m=+144.747956000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.761796 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq"] Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.783243 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.783653 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.283635237 +0000 UTC m=+144.849939061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.886554 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.886884 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.38687228 +0000 UTC m=+144.953176104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:13:59 crc kubenswrapper[4984]: I0130 10:13:59.987885 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:13:59 crc kubenswrapper[4984]: E0130 10:13:59.988478 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.488463648 +0000 UTC m=+145.054767472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.066958 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.094203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.094637 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.594622181 +0000 UTC m=+145.160926005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.121660 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" event={"ID":"3294dd98-dfda-4f40-bdd8-ad0b8932432d","Type":"ContainerStarted","Data":"d1a034491ed6daf84dc17f5d63044c0801c115dbca562b886eea23ce262bddb3"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.127388 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k9xrn" event={"ID":"5f7fb8a3-2517-48bf-9a10-82725a7391cb","Type":"ContainerStarted","Data":"301aeac89d8e9bdfbe134b9ff92735088e4aeef66bc31303610e344c70b968d6"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.131576 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jc8ph" event={"ID":"53f7d13c-e0e5-47cd-b819-8ad8e6e1e761","Type":"ContainerStarted","Data":"0732e61d68b0c4329e59579400aab67845dd855d6c55216938ffbb04cc930430"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.132765 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.132850 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jc8ph" 
podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.199173 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.211887 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.7118687 +0000 UTC m=+145.278172524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.256030 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" event={"ID":"a5c7a47a-7861-4e43-b3f8-a187fc65f041","Type":"ContainerStarted","Data":"008bc110a4150a4f316807d409f4953605063d6dd29936f7f92d10d36fa4241d"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.286284 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" event={"ID":"1d267eea-0fb6-4471-89b8-0de23f0a5873","Type":"ContainerStarted","Data":"7bbbdb2cb42e46ca9f2a0f8a868a834178266e2fb4ab5f633e20ef5d36a3a307"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.297325 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" event={"ID":"1f9e5765-1adb-417b-abbc-82c398a424a2","Type":"ContainerStarted","Data":"9648c6027ced892d4c955f72eeba24b8e0fd3b4dcf8fe20f9c7ee1b7ff7ba83c"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.299186 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" event={"ID":"41fed1a2-7c34-4363-bad0-ac0740961cad","Type":"ContainerStarted","Data":"14455f633a9b5fe51d39c12a8d371f46a45d93dce5bde1dde335216387f3a7d2"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.307097 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.307499 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.807486123 +0000 UTC m=+145.373789937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.317350 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" event={"ID":"01fe4b95-41f9-432d-b597-3941f219b7af","Type":"ContainerStarted","Data":"82ec606e8193c56213694d2d41c5ef8bc902d092f97533a4e819524312ff6ccd"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.318159 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.319159 4984 patch_prober.go:28] interesting pod/console-operator-58897d9998-z7s9j container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.319238 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" podUID="01fe4b95-41f9-432d-b597-3941f219b7af" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.324488 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 
10:14:00.336627 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" event={"ID":"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0","Type":"ContainerStarted","Data":"6b862628f7cb17aecedce100d4d5accab2716a2a3969709e0db85105dcaa966a"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.336983 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.350501 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerStarted","Data":"1d107edce64a981b016ac18f64e3952e99a1d1ef26bb18f85c1948ec49ead73c"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.354630 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-b9k4d" podStartSLOduration=123.354607143 podStartE2EDuration="2m3.354607143s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.305446964 +0000 UTC m=+144.871750778" watchObservedRunningTime="2026-01-30 10:14:00.354607143 +0000 UTC m=+144.920910957" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.358474 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tm6bv"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.359255 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p5hhc" podStartSLOduration=123.35924385 podStartE2EDuration="2m3.35924385s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 10:14:00.340615998 +0000 UTC m=+144.906919822" watchObservedRunningTime="2026-01-30 10:14:00.35924385 +0000 UTC m=+144.925547674" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.363703 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.369499 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.383853 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerStarted","Data":"22894fd3f7185098bfb82595039c231f2f5583d91c055ff95ffbf8f516afcd2e"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.395001 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" event={"ID":"bc2c9228-6181-419f-acdb-869007ac6f6c","Type":"ContainerStarted","Data":"007301544b29cc61478615627f8d6ffe84e0d4b61ecae9393a10bb6331168a85"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.399547 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jj8f2" podStartSLOduration=123.399525667 podStartE2EDuration="2m3.399525667s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.384153175 +0000 UTC m=+144.950456999" watchObservedRunningTime="2026-01-30 10:14:00.399525667 +0000 UTC m=+144.965829491" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.408063 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.409382 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:00.909366071 +0000 UTC m=+145.475669895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.438702 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" event={"ID":"3c2dcd5a-96f0-48ff-a004-9764d24b66b1","Type":"ContainerStarted","Data":"658b712cfc2bd705261b67aae7bb0f201f65994e338ff510e87909761461e86a"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.453531 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" event={"ID":"fdc71eba-e354-4963-967a-7e1c908467b5","Type":"ContainerStarted","Data":"0699a016951b8f0efc2aa8997f0d6d40413693b73f802c855adceebdccee80aa"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.487960 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" event={"ID":"ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0","Type":"ContainerStarted","Data":"b1d81e81f66e6428e996ca7f686ef9e8908aa854c2e4a9d174818f49eb0d428e"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.488725 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.490252 4984 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wc7jt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.490358 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" podUID="ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.504792 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerStarted","Data":"d50bbcffbf98d16fce57cd7c81f40638192b3cecf76451eac0e5109332dde5b2"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.510901 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc 
kubenswrapper[4984]: E0130 10:14:00.512689 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.012665656 +0000 UTC m=+145.578969570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.525801 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" event={"ID":"362ed1a8-599d-44c5-bf2d-d9d7d69517e8","Type":"ContainerStarted","Data":"387b950d461625b54785706d9b2f1c1cb310861069c39459081584bcd5b1b466"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.537927 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" podStartSLOduration=123.537904453 podStartE2EDuration="2m3.537904453s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.47623555 +0000 UTC m=+145.042539374" watchObservedRunningTime="2026-01-30 10:14:00.537904453 +0000 UTC m=+145.104208277" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.538401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-b5gpb" 
event={"ID":"04dd150e-af11-495b-a44b-10cce42da55b","Type":"ContainerStarted","Data":"7c23a07be811936c7890f48faf43b6c3e725278fa2d110b700a697eb033d36af"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.553811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" event={"ID":"50a9dda1-acf5-471f-a6cd-46e77a1dfa24","Type":"ContainerStarted","Data":"5519abc75e71a338677d01638f405658fca16ec07102cc2b55d862bdf15b61b0"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.554910 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.565081 4984 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6kcss container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.567016 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" podUID="50a9dda1-acf5-471f-a6cd-46e77a1dfa24" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.589408 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" event={"ID":"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10","Type":"ContainerStarted","Data":"a3161b1f3d59aefbc06ae9fe97490a9eece99847ea063ad7ab287ddaa666fbf1"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.591377 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" 
event={"ID":"5d031ce5-81d8-4a93-8ef6-a97a86e06195","Type":"ContainerStarted","Data":"f9c17c69f7653da6810adb58a1c7427835a5870597005f5e5fd247c61d834542"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.618771 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.626276 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.126240051 +0000 UTC m=+145.692543875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.629174 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" podStartSLOduration=123.62915174 podStartE2EDuration="2m3.62915174s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.617231155 +0000 UTC m=+145.183534979" watchObservedRunningTime="2026-01-30 10:14:00.62915174 +0000 UTC 
m=+145.195455564" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.630369 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-tnwfs"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.678497 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerStarted","Data":"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"} Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.700767 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h9smt"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.704710 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.715095 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzff9"] Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.732669 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.733861 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.233848923 +0000 UTC m=+145.800152847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.734009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerStarted","Data":"ec2d22de67b56a877f06438f63a967e0f4c4b09fd390d26e87379805202f3828"} Jan 30 10:14:00 crc kubenswrapper[4984]: W0130 10:14:00.739331 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b635e15_1e86_4142_8e1d_c26628aa2403.slice/crio-a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65 WatchSource:0}: Error finding container a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65: Status 404 returned error can't find the container with id a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65 Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.753615 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.754556 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k55nt" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.833966 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.835946 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.335896976 +0000 UTC m=+145.902200820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.861488 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" podStartSLOduration=123.861456723 podStartE2EDuration="2m3.861456723s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.789653706 +0000 UTC m=+145.355957530" watchObservedRunningTime="2026-01-30 10:14:00.861456723 +0000 UTC m=+145.427760567" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.892817 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jc8ph" podStartSLOduration=123.892799767 podStartE2EDuration="2m3.892799767s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.892036511 +0000 UTC m=+145.458340345" watchObservedRunningTime="2026-01-30 10:14:00.892799767 +0000 UTC m=+145.459103591" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.926151 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b8xqj" podStartSLOduration=123.926131628 podStartE2EDuration="2m3.926131628s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.921229812 +0000 UTC m=+145.487533646" watchObservedRunningTime="2026-01-30 10:14:00.926131628 +0000 UTC m=+145.492435452" Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.936946 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:00 crc kubenswrapper[4984]: E0130 10:14:00.937364 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.437349759 +0000 UTC m=+146.003653583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:00 crc kubenswrapper[4984]: I0130 10:14:00.959517 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-b5gpb" podStartSLOduration=123.95950138 podStartE2EDuration="2m3.95950138s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:00.957469221 +0000 UTC m=+145.523773045" watchObservedRunningTime="2026-01-30 10:14:00.95950138 +0000 UTC m=+145.525805194" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.038834 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.039301 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.539258727 +0000 UTC m=+146.105562551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.054011 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" podStartSLOduration=125.053992637 podStartE2EDuration="2m5.053992637s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.053256262 +0000 UTC m=+145.619560096" watchObservedRunningTime="2026-01-30 10:14:01.053992637 +0000 UTC m=+145.620296471" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.091983 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" podStartSLOduration=124.091966896 podStartE2EDuration="2m4.091966896s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.090385302 +0000 UTC m=+145.656689126" watchObservedRunningTime="2026-01-30 10:14:01.091966896 +0000 UTC m=+145.658270720" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.140986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: 
\"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.141291 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.641279369 +0000 UTC m=+146.207583193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.186224 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" podStartSLOduration=124.186203374 podStartE2EDuration="2m4.186203374s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.139225869 +0000 UTC m=+145.705529693" watchObservedRunningTime="2026-01-30 10:14:01.186203374 +0000 UTC m=+145.752507198" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.186728 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" podStartSLOduration=124.186723431 podStartE2EDuration="2m4.186723431s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 
10:14:01.185476209 +0000 UTC m=+145.751780033" watchObservedRunningTime="2026-01-30 10:14:01.186723431 +0000 UTC m=+145.753027255" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.247137 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.247231 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-47sww" podStartSLOduration=124.247222064 podStartE2EDuration="2m4.247222064s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.246690926 +0000 UTC m=+145.812994750" watchObservedRunningTime="2026-01-30 10:14:01.247222064 +0000 UTC m=+145.813525888" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.248073 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.748060843 +0000 UTC m=+146.314364667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.320783 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cw4qh" podStartSLOduration=124.3207618 podStartE2EDuration="2m4.3207618s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.276495628 +0000 UTC m=+145.842799452" watchObservedRunningTime="2026-01-30 10:14:01.3207618 +0000 UTC m=+145.887065624" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.321259 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" podStartSLOduration=124.321252497 podStartE2EDuration="2m4.321252497s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.318951179 +0000 UTC m=+145.885255003" watchObservedRunningTime="2026-01-30 10:14:01.321252497 +0000 UTC m=+145.887556321" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.349000 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.349426 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.849410232 +0000 UTC m=+146.415714056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.366479 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.380677 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:01 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:01 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:01 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.380729 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 
10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.452708 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.453063 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:01.953049259 +0000 UTC m=+146.519353083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.554933 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.555204 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:02.055193936 +0000 UTC m=+146.621497760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.656284 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.657007 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.1569855 +0000 UTC m=+146.723289324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.758008 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.758381 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.258368221 +0000 UTC m=+146.824672045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.781468 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerStarted","Data":"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.782091 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.794338 4984 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-59vj6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.795308 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.795078 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" 
event={"ID":"41fed1a2-7c34-4363-bad0-ac0740961cad","Type":"ContainerStarted","Data":"62c7ff7576e10aa88d6094fb824d2beb203320b96e4cc6756e12453560436b88"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.810939 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" event={"ID":"3294dd98-dfda-4f40-bdd8-ad0b8932432d","Type":"ContainerStarted","Data":"03f401ff4572dc2177274e4915ef8233351fde6f62f7b9de1ca9989292a1c703"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.817857 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" event={"ID":"74c9e5fc-e679-408d-ab8e-aab60ca942e9","Type":"ContainerStarted","Data":"92fa44b7b6cb02794397be0e23d525b02239f08cc8c1827cde537a402c4e24a7"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.817901 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" event={"ID":"74c9e5fc-e679-408d-ab8e-aab60ca942e9","Type":"ContainerStarted","Data":"8306bfeb06d314e7b017aee7dae03484d367991e1b2e3816187b786c2073b255"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.824285 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"361331219b350506ddf6d687f4962f0883845f4054d4451565087aa1ea6dec90"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.857515 4984 generic.go:334] "Generic (PLEG): container finished" podID="549b3b6c-e68d-4da4-8780-643fdbf7e4c9" containerID="5ab984a1154eb5ad3a9a3604f83049396ded07ecdfb0542d40af96126a13ab2e" exitCode=0 Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.857843 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" 
event={"ID":"549b3b6c-e68d-4da4-8780-643fdbf7e4c9","Type":"ContainerDied","Data":"5ab984a1154eb5ad3a9a3604f83049396ded07ecdfb0542d40af96126a13ab2e"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.859088 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.861114 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.361095357 +0000 UTC m=+146.927399181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.865455 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" event={"ID":"fdc71eba-e354-4963-967a-7e1c908467b5","Type":"ContainerStarted","Data":"ef3960325150cd358686864330f0c6b777279c85b1fc615bd6b4b1b6f7ec2df7"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.882315 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" 
event={"ID":"a7d1bcbb-a6ac-4a3a-b291-8cfc8ce61b10","Type":"ContainerStarted","Data":"aabbc58250375bc5f55d788a4db33ac17e6bdf40642f38d394e4d0be27056f06"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.887418 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mbkzc" podStartSLOduration=124.887397109 podStartE2EDuration="2m4.887397109s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.883458956 +0000 UTC m=+146.449762780" watchObservedRunningTime="2026-01-30 10:14:01.887397109 +0000 UTC m=+146.453700933" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.887660 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" podStartSLOduration=124.887646238 podStartE2EDuration="2m4.887646238s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.82611092 +0000 UTC m=+146.392414754" watchObservedRunningTime="2026-01-30 10:14:01.887646238 +0000 UTC m=+146.453950052" Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.888906 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwfs" event={"ID":"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68","Type":"ContainerStarted","Data":"27ea6c50669b8778e541bc64a8b74fb97f659dd744319633faae8d0a0a46df22"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.926821 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" 
event={"ID":"1f9e5765-1adb-417b-abbc-82c398a424a2","Type":"ContainerStarted","Data":"1ecbdb8ae21a09346a5b98a17ebe388815e843d7fba69d7eb47baedb83c1fbd1"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.956362 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" event={"ID":"362ed1a8-599d-44c5-bf2d-d9d7d69517e8","Type":"ContainerStarted","Data":"a225fd87f86254c5a432f543489cbc555ca160013e66067199b33297621d69db"} Jan 30 10:14:01 crc kubenswrapper[4984]: I0130 10:14:01.980442 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:01 crc kubenswrapper[4984]: E0130 10:14:01.984801 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.484782784 +0000 UTC m=+147.051086608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.002397 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" event={"ID":"b8fd0694-7375-4f0f-8cf1-84af752803b6","Type":"ContainerStarted","Data":"b485f6f7c6190f154740f68b4417664590c5b58ea29c2a4671113230856baab8"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.003474 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zl47s" podStartSLOduration=125.003455558 podStartE2EDuration="2m5.003455558s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.002741974 +0000 UTC m=+146.569045798" watchObservedRunningTime="2026-01-30 10:14:02.003455558 +0000 UTC m=+146.569759382" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.003932 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kgt7t" podStartSLOduration=126.003910133 podStartE2EDuration="2m6.003910133s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:01.929649143 +0000 UTC m=+146.495952957" watchObservedRunningTime="2026-01-30 10:14:02.003910133 +0000 UTC m=+146.570213957" Jan 30 
10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.005672 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerStarted","Data":"b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.056984 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tm6bv" event={"ID":"e9a01c47-eab2-4990-a659-a1f15a8176dd","Type":"ContainerStarted","Data":"8d3cfa56fa77b180b141579185dcc64fe220c1a1d586a3d6de43c2f635115011"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.057312 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tm6bv" event={"ID":"e9a01c47-eab2-4990-a659-a1f15a8176dd","Type":"ContainerStarted","Data":"fdb7a6a725360fa79f97561c6ba28587f8b87d80741083b568bdbb89ce461b6b"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.066913 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" event={"ID":"1d267eea-0fb6-4471-89b8-0de23f0a5873","Type":"ContainerStarted","Data":"fbc4e3c07cfdea4da11f5f99b46fa46449c85ad088ad5487cb4736ba61736857"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.066965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" event={"ID":"1d267eea-0fb6-4471-89b8-0de23f0a5873","Type":"ContainerStarted","Data":"54e5ed88926849caedd1613baf95128e3c720a8e4ac5fec265c0105b7a4d2461"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.069754 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k9xrn" 
event={"ID":"5f7fb8a3-2517-48bf-9a10-82725a7391cb","Type":"ContainerStarted","Data":"db22e08b1375cbe91c0c6691ee5283a52b3cdcdd23ece840b610c8292748b312"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.076488 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"55eaba56218414e484072cae8971ca1d79d72d34071e7b9195611d687822bbb7"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.087115 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.088588 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.588568156 +0000 UTC m=+147.154871980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.118470 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sz5wt" podStartSLOduration=125.11844821 podStartE2EDuration="2m5.11844821s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.086692713 +0000 UTC m=+146.652996537" watchObservedRunningTime="2026-01-30 10:14:02.11844821 +0000 UTC m=+146.684752024" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.121210 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g5m7t" event={"ID":"3c2dcd5a-96f0-48ff-a004-9764d24b66b1","Type":"ContainerStarted","Data":"abe2c9aaa9e673f7aff58c200f88224065d1a83cc592682f1c5c1ab56a634d63"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.145500 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" event={"ID":"689169b7-2cad-4763-9b8d-fdb50126ec69","Type":"ContainerStarted","Data":"894abc265f760e03c040845c77138ba87077a2802e9bced39e83f05aed031205"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.186463 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vqg9w" podStartSLOduration=125.186441078 
podStartE2EDuration="2m5.186441078s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.174877345 +0000 UTC m=+146.741181169" watchObservedRunningTime="2026-01-30 10:14:02.186441078 +0000 UTC m=+146.752744902" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.191333 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.193948 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" event={"ID":"7b635e15-1e86-4142-8e1d-c26628aa2403","Type":"ContainerStarted","Data":"a784ebc909a5e20cb706651f5e2a69e9f5a4ecc1bb4f269084b4cb966d0a7f65"} Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.195111 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.695099702 +0000 UTC m=+147.261403526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.221083 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kc9w4" podStartSLOduration=125.221069053 podStartE2EDuration="2m5.221069053s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.220037298 +0000 UTC m=+146.786341122" watchObservedRunningTime="2026-01-30 10:14:02.221069053 +0000 UTC m=+146.787372877" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.238795 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerStarted","Data":"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.262760 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerStarted","Data":"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.263961 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.265247 4984 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lf7j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.265293 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.292778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.293875 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.793859873 +0000 UTC m=+147.360163697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.297770 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerStarted","Data":"659c778caa3e998286e950e5d885087dc705ae499832fe2cf9924d02ed342f9f"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.325064 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mtldg" podStartSLOduration=125.325042921 podStartE2EDuration="2m5.325042921s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.262503219 +0000 UTC m=+146.828807043" watchObservedRunningTime="2026-01-30 10:14:02.325042921 +0000 UTC m=+146.891346745" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.329250 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tm6bv" podStartSLOduration=7.329236414 podStartE2EDuration="7.329236414s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.326987937 +0000 UTC m=+146.893291761" watchObservedRunningTime="2026-01-30 10:14:02.329236414 +0000 UTC m=+146.895540238" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.329769 
4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-k9xrn" podStartSLOduration=7.329763211 podStartE2EDuration="7.329763211s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.298810981 +0000 UTC m=+146.865114805" watchObservedRunningTime="2026-01-30 10:14:02.329763211 +0000 UTC m=+146.896067035" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.335514 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" event={"ID":"477b0c18-df7c-46c8-bae3-d0dda1af580c","Type":"ContainerStarted","Data":"5b097930b0a77fb7cba6077c842b0603a915895cc9d97f95fb07f5eaabb800cf"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.335554 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" event={"ID":"477b0c18-df7c-46c8-bae3-d0dda1af580c","Type":"ContainerStarted","Data":"f5fa7eaf7427770821c1d1a4aa6a34a00402375e2d2121f48073834f41a6f193"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.360992 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gb94b" podStartSLOduration=125.360976081 podStartE2EDuration="2m5.360976081s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.360000458 +0000 UTC m=+146.926304282" watchObservedRunningTime="2026-01-30 10:14:02.360976081 +0000 UTC m=+146.927279905" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.384668 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" event={"ID":"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0","Type":"ContainerStarted","Data":"5f78f999a45e026532cff91c591535f5cde36cc8ad11d2e857b7e9de6f79e4c9"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.393615 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:02 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:02 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:02 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.393666 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.394403 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.397643 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.897630765 +0000 UTC m=+147.463934589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.435057 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" event={"ID":"91c03f30-b334-480b-937d-15b6d0b493a7","Type":"ContainerStarted","Data":"99230cb91a992a249fb9a2383a85874916b90baf09ff9dfc5c1ff9f683675c8e"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.435102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" event={"ID":"91c03f30-b334-480b-937d-15b6d0b493a7","Type":"ContainerStarted","Data":"0d52af09a1f62b57d67e6e9a1549447344849457cc426877b777ed6ef0eb72ca"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.476741 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" event={"ID":"19fa971c-228f-4457-81be-b2d9220ce27f","Type":"ContainerStarted","Data":"35596638ffd8fdd19d27c9adf74965850144dc178e41450f9ad3ace770880810"} Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.483344 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.483488 4984 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.495531 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.496853 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:02.99681163 +0000 UTC m=+147.563115504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.530779 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6kcss" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.538694 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fczwn" podStartSLOduration=125.538672101 podStartE2EDuration="2m5.538672101s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.483093875 +0000 UTC m=+147.049397699" watchObservedRunningTime="2026-01-30 10:14:02.538672101 +0000 UTC m=+147.104975925" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.552171 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z7s9j" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.606949 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-v2prt" podStartSLOduration=125.606908367 podStartE2EDuration="2m5.606908367s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.60612967 +0000 UTC m=+147.172433494" watchObservedRunningTime="2026-01-30 
10:14:02.606908367 +0000 UTC m=+147.173212191" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.607796 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.608983 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podStartSLOduration=125.608968667 podStartE2EDuration="2m5.608968667s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.544319203 +0000 UTC m=+147.110623037" watchObservedRunningTime="2026-01-30 10:14:02.608968667 +0000 UTC m=+147.175272491" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.617826 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.117779336 +0000 UTC m=+147.684083160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.705375 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" podStartSLOduration=125.705359148 podStartE2EDuration="2m5.705359148s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.704973495 +0000 UTC m=+147.271277319" watchObservedRunningTime="2026-01-30 10:14:02.705359148 +0000 UTC m=+147.271662972" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.707409 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" podStartSLOduration=125.707404217 podStartE2EDuration="2m5.707404217s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.647505254 +0000 UTC m=+147.213809078" watchObservedRunningTime="2026-01-30 10:14:02.707404217 +0000 UTC m=+147.273708041" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.709763 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.709985 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.209952314 +0000 UTC m=+147.776256148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.710060 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.710483 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.210471541 +0000 UTC m=+147.776775445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.812857 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.813625 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.313608051 +0000 UTC m=+147.879911875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.818330 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.818610 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" podStartSLOduration=125.81859369 podStartE2EDuration="2m5.81859369s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:02.815065401 +0000 UTC m=+147.381369225" watchObservedRunningTime="2026-01-30 10:14:02.81859369 +0000 UTC m=+147.384897514" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.819388 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.823776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.856078 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915657 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915690 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915767 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:02 crc kubenswrapper[4984]: I0130 10:14:02.915802 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7xqn\" (UniqueName: 
\"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:02 crc kubenswrapper[4984]: E0130 10:14:02.916498 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.416486023 +0000 UTC m=+147.982789847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.003404 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.003469 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.020675 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.020937 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021070 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021101 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.021395 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.521370242 +0000 UTC m=+148.087674066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021894 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.021992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.048858 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.049971 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.069150 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.086621 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.121936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.122339 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.622323548 +0000 UTC m=+148.188627372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.137424 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"community-operators-8cnkg\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.155382 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.208459 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.209370 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.222940 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.223198 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.223262 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.223301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.223396 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-30 10:14:03.723380307 +0000 UTC m=+148.289684131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.312230 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325549 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325586 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " 
pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325605 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325647 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.325694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.326005 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.326358 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:03.826347012 +0000 UTC m=+148.392650826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.326930 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.327216 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.394972 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"certified-operators-w4cgz\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.396519 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.405049 4984 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.412103 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:03 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:03 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:03 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.412154 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.418984 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428036 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428314 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428390 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.428428 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.428481 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:03.928456737 +0000 UTC m=+148.494760571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.429688 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.429803 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.458934 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"community-operators-dk77x\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.485053 4984 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wc7jt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.485100 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" podUID="ddc4eab7-7eea-403d-aeb0-d00bf1e1d1a0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.506550 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gm2ht" event={"ID":"7b635e15-1e86-4142-8e1d-c26628aa2403","Type":"ContainerStarted","Data":"139c935d022e110cca3f17545792f356a511e9802b101812f881010ad59bef14"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.512161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mgwkp" event={"ID":"535d0d2c-cbc5-4b3d-8c1f-1a2451f9cec0","Type":"ContainerStarted","Data":"fcb97bf33d069b8cf02220c164c204c2f9a6b32968c22071a9fb399b64f2f155"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.518486 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwfs" event={"ID":"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68","Type":"ContainerStarted","Data":"1f92843512b822c91ad12848a5264deee50c474ed31ae69b734c1bed3b655aa3"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.518516 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-tnwfs" event={"ID":"87dbb26d-b1c4-4a8f-b2b6-64e39edadd68","Type":"ContainerStarted","Data":"fadb0631e77c9a49aef94f56ac428cc8611d35c5973e9214484a2c03d99c334d"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.519078 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-tnwfs" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 
10:14:03.523369 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" event={"ID":"b8fd0694-7375-4f0f-8cf1-84af752803b6","Type":"ContainerStarted","Data":"85b351802096b6ac17dabed2e20ea9fa0e83751307a3586fd3de7c6fb45b836e"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.523396 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" event={"ID":"b8fd0694-7375-4f0f-8cf1-84af752803b6","Type":"ContainerStarted","Data":"9b9bce640c20806c5b47bce08964584215fed903de8ecd707db12c742aab15b0"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.523868 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.530920 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.530959 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.531005 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fd8k\" (UniqueName: 
\"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.531022 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.531300 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.031288557 +0000 UTC m=+148.597592381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547208 4984 generic.go:334] "Generic (PLEG): container finished" podID="19fa971c-228f-4457-81be-b2d9220ce27f" containerID="8c4a288bbf3ca8d659e3baa40a24d3c4957909b7c5275ae792a692c37d549c9b" exitCode=0 Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547307 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" 
event={"ID":"19fa971c-228f-4457-81be-b2d9220ce27f","Type":"ContainerDied","Data":"8c4a288bbf3ca8d659e3baa40a24d3c4957909b7c5275ae792a692c37d549c9b"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547336 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" event={"ID":"19fa971c-228f-4457-81be-b2d9220ce27f","Type":"ContainerStarted","Data":"9408d0452c9224851a6729b0f77776884723b08217d286d0acfe04e0eec03974"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547856 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.547952 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.568605 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-tnwfs" podStartSLOduration=8.568584392 podStartE2EDuration="8.568584392s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.542488107 +0000 UTC m=+148.108791931" watchObservedRunningTime="2026-01-30 10:14:03.568584392 +0000 UTC m=+148.134888216" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.569583 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" podStartSLOduration=126.569575836 podStartE2EDuration="2m6.569575836s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.564207914 +0000 UTC m=+148.130511738" 
watchObservedRunningTime="2026-01-30 10:14:03.569575836 +0000 UTC m=+148.135879660" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.578068 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" event={"ID":"41fed1a2-7c34-4363-bad0-ac0740961cad","Type":"ContainerStarted","Data":"60c291e23734e20d25712ba6ef2a8740692c5179714a2fde29d37cd9fe11106e"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.594671 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" podStartSLOduration=126.594633696 podStartE2EDuration="2m6.594633696s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.591687216 +0000 UTC m=+148.157991040" watchObservedRunningTime="2026-01-30 10:14:03.594633696 +0000 UTC m=+148.160937520" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.605578 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"ebe3ef114cbb4782c56c4ed7042911f23e7df193e419a7215250a01f79b2e14b"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.609967 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" event={"ID":"549b3b6c-e68d-4da4-8780-643fdbf7e4c9","Type":"ContainerStarted","Data":"4a571e6f19403b2302694f635424d2d4d26ff0fd7f60fabcf4b66a3ab4356616"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.617856 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-j6cv2" podStartSLOduration=126.617839474 podStartE2EDuration="2m6.617839474s" podCreationTimestamp="2026-01-30 10:11:57 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.614800811 +0000 UTC m=+148.181104635" watchObservedRunningTime="2026-01-30 10:14:03.617839474 +0000 UTC m=+148.184143298" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.636857 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637010 4984 generic.go:334] "Generic (PLEG): container finished" podID="3cb637fe-7a94-4790-abf9-3beb38ecb8da" containerID="284055d70f4b9c5e29f2b0a0e012e6be97affdaba213aca86b1c2703e3eb6309" exitCode=0 Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637119 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerDied","Data":"284055d70f4b9c5e29f2b0a0e012e6be97affdaba213aca86b1c2703e3eb6309"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637142 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " 
pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637153 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerStarted","Data":"ec2206fbe3e9720b633cbd58dd1bf1a7409869b6990e0f322d4c2d92b687acb6"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637166 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" event={"ID":"3cb637fe-7a94-4790-abf9-3beb38ecb8da","Type":"ContainerStarted","Data":"74f668e3f0f9dd5f82298b56b2689255a738ad0f5dbd9f39555596f07229bc56"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.637349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.648165 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.148141612 +0000 UTC m=+148.714445436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.648575 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.649118 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.653509 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" podStartSLOduration=126.653491274 podStartE2EDuration="2m6.653491274s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.65013104 +0000 UTC m=+148.216434874" watchObservedRunningTime="2026-01-30 10:14:03.653491274 +0000 UTC m=+148.219795098" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.672959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kmfcq" 
event={"ID":"91c03f30-b334-480b-937d-15b6d0b493a7","Type":"ContainerStarted","Data":"4f4c54755db5e790fb5039c31a88da62c24a44ae9e57e0995ab21128b0e61464"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.679943 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.696160 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.697269 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"certified-operators-njw8t\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.709507 4984 csr.go:261] certificate signing request csr-br585 is approved, waiting to be issued Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.720043 4984 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lf7j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.720090 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.723029 4984 csr.go:257] certificate signing request csr-br585 is issued Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 
10:14:03.735344 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.735393 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"ccbccd64af542f90ca0aef149e38ed1030deb5adc4ad6b563b7730a8d68baa45"} Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.735628 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.736071 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" podStartSLOduration=126.736055856 podStartE2EDuration="2m6.736055856s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.711017586 +0000 UTC m=+148.277321410" watchObservedRunningTime="2026-01-30 10:14:03.736055856 +0000 UTC m=+148.302359670" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.738798 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.742734 4984 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.242713062 +0000 UTC m=+148.809016886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.755439 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wc7jt" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.788382 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" podStartSLOduration=126.788367711 podStartE2EDuration="2m6.788367711s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:03.739848144 +0000 UTC m=+148.306151968" watchObservedRunningTime="2026-01-30 10:14:03.788367711 +0000 UTC m=+148.354671535" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.839396 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.840569 4984 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.340545941 +0000 UTC m=+148.906849755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.840637 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.841471 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.341463352 +0000 UTC m=+148.907767166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.854557 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.943994 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.944520 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.444496648 +0000 UTC m=+149.010800472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:03 crc kubenswrapper[4984]: I0130 10:14:03.961876 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:03 crc kubenswrapper[4984]: E0130 10:14:03.962186 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.462174848 +0000 UTC m=+149.028478672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.064159 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.064564 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.564547652 +0000 UTC m=+149.130851476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.165851 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.166314 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.666294115 +0000 UTC m=+149.232598009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.185890 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:14:04 crc kubenswrapper[4984]: W0130 10:14:04.221017 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874a87b2_c81a_4ce9_85c6_c41d18835f35.slice/crio-d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2 WatchSource:0}: Error finding container d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2: Status 404 returned error can't find the container with id d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2 Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.275474 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.276024 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.776008059 +0000 UTC m=+149.342311883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.369789 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:04 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:04 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:04 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.369849 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.376774 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.377095 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:04.877079979 +0000 UTC m=+149.443383803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.401406 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.477588 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.478017 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.977556418 +0000 UTC m=+149.543860242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.478050 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.478474 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:04.978451049 +0000 UTC m=+149.544754873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.541416 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.541478 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.543367 4984 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fzff9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.543422 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" podUID="3cb637fe-7a94-4790-abf9-3beb38ecb8da" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.21:8443/livez\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.579707 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 
10:14:04.579869 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.079843019 +0000 UTC m=+149.646146843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.580054 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.580445 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.080433039 +0000 UTC m=+149.646736863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.638498 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"]
Jan 30 10:14:04 crc kubenswrapper[4984]: W0130 10:14:04.642588 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33689f3c_1867_4707_a8c2_ed56c467cff6.slice/crio-b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b WatchSource:0}: Error finding container b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b: Status 404 returned error can't find the container with id b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.681014 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.681191 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.181165878 +0000 UTC m=+149.747469702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.681348 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.681686 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.181672385 +0000 UTC m=+149.747976209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.722504 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerStarted","Data":"b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b"}
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723268 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerStarted","Data":"d624716dec815a31dc6fb1b18652f2e1a4591d64f410b4c644c4fb229fcd424e"}
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723787 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 10:09:03 +0000 UTC, rotation deadline is 2026-12-18 07:44:20.218263615 +0000 UTC
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723840 4984 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7725h30m15.494426919s for next certificate rotation
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.723972 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerStarted","Data":"d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2"}
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.725127 4984 generic.go:334] "Generic (PLEG): container finished" podID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" exitCode=0
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.725229 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba"}
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.725287 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerStarted","Data":"ff00561d64a6b687fd04a281bc0b10957180facce6b051e1ab6f63d8c0e3e399"}
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.726538 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.783345 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.783744 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.283730039 +0000 UTC m=+149.850033863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.820204 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"]
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.821718 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.824124 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.830785 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"]
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.884947 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.892399 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.392384496 +0000 UTC m=+149.958688320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.986813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.986958 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.987019 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.486988426 +0000 UTC m=+150.053292300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.987262 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.987349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:04 crc kubenswrapper[4984]: I0130 10:14:04.987606 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:04 crc kubenswrapper[4984]: E0130 10:14:04.987698 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.48768643 +0000 UTC m=+150.053990334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.088706 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.088905 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.588876844 +0000 UTC m=+150.155180678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.088970 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089009 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089035 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089068 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089233 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.089402 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.589388381 +0000 UTC m=+150.155692205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089572 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.089700 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.094922 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.095106 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.095319 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.106817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"redhat-marketplace-9vv7r\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.108835 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.113703 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.130138 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.144695 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.190455 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.190637 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.690607856 +0000 UTC m=+150.256911680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.190713 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.190785 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.191051 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.691040331 +0000 UTC m=+150.257344255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.197825 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cec0ee98-d570-417f-a2fb-7ac19e3b25c0-metrics-certs\") pod \"network-metrics-daemon-sdmkd\" (UID: \"cec0ee98-d570-417f-a2fb-7ac19e3b25c0\") " pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.204329 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"]
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.205220 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.218319 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"]
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.259531 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.292299 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.292461 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.792431592 +0000 UTC m=+150.358735416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.292853 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.293230 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.793206628 +0000 UTC m=+150.359510452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.317151 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sdmkd"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.368768 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 10:14:05 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld
Jan 30 10:14:05 crc kubenswrapper[4984]: [+]process-running ok
Jan 30 10:14:05 crc kubenswrapper[4984]: healthz check failed
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.368822 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396558 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396815 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.396904 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.397203 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:05.897189057 +0000 UTC m=+150.463492881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499193 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499226 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499275 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499672 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.499872 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.500365 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.000354588 +0000 UTC m=+150.566658402 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.518629 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"redhat-marketplace-b9zx8\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") " pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.577533 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.592855 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"]
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.600488 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.600850 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.100835368 +0000 UTC m=+150.667139192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.662725 4984 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.701926 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.702530 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.202513879 +0000 UTC m=+150.768817703 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.705473 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sdmkd"] Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.735250 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bda5ab0954820602ce380a42cd1e8b0b38108cdbead38acf61bf495e86970ba1"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.783866 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"f8f7cfe79fef7ae968f236d572066f31a2aa493df81e9080bec8b804f970e9d4"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.784038 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"97fd18183bcfdbb0e61d75bea2054aa48b8548cbfb4a2e4b037d612035360967"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.788747 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerStarted","Data":"4aa1ead20f9be6ec24d5528456caa32578bf134deec9e2dc8d7d858e101255c0"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 
10:14:05.803643 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.803865 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.303832887 +0000 UTC m=+150.870136711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.803954 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.804607 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 10:14:06.304596573 +0000 UTC m=+150.870900397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.811230 4984 generic.go:334] "Generic (PLEG): container finished" podID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" exitCode=0 Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.811308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.814127 4984 generic.go:334] "Generic (PLEG): container finished" podID="b628557d-490d-4803-8ae3-fde88678c6a4" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" exitCode=0 Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.814161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.818702 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b3315c76b287e825e2d6771f21ffd188461e9cad424dfec0c9a1df41afb52803"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.818943 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.833761 4984 generic.go:334] "Generic (PLEG): container finished" podID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" exitCode=0 Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.835100 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7"} Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.860330 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p6b4g" Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.907779 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.908552 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.40852598 +0000 UTC m=+150.974829804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.924692 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:05 crc kubenswrapper[4984]: E0130 10:14:05.933547 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.433508998 +0000 UTC m=+150.999812822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:05 crc kubenswrapper[4984]: I0130 10:14:05.941923 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.025930 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.027712 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.527695364 +0000 UTC m=+151.093999188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.127617 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.127887 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.627872583 +0000 UTC m=+151.194176407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.201900 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.204587 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.205100 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.206431 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.229737 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.230169 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.730131664 +0000 UTC m=+151.296435488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.230561 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: E0130 10:14:06.230977 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 10:14:06.730969282 +0000 UTC m=+151.297273106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv7sn" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.259432 4984 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T10:14:05.662746189Z","Handler":null,"Name":""} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.261848 4984 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.261890 4984 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331368 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331710 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " 
pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331765 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.331815 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.335489 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.370958 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:06 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:06 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:06 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.371019 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.433953 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434043 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434124 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") 
" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434166 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434572 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.434602 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.437504 4984 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.437556 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.456153 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"redhat-operators-dc27n\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.462725 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv7sn\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.583739 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.592390 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.593710 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.601122 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.707223 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.738429 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.738477 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.738519 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.841661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"redhat-operators-vzmvg\" (UID: 
\"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.841838 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.841875 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.842527 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.842547 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.851452 4984 generic.go:334] "Generic (PLEG): container finished" podID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerID="b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a" exitCode=0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.851539 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerDied","Data":"b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.862501 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"redhat-operators-vzmvg\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.884975 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" event={"ID":"48ae7d4f-38b1-40c0-ad61-815992265930","Type":"ContainerStarted","Data":"05fec0f570dac27c27647b10ad3ced36b18b0435ab601e2a61ac806860f3e111"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.893899 4984 generic.go:334] "Generic (PLEG): container finished" podID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" exitCode=0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.893963 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.898604 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" event={"ID":"cec0ee98-d570-417f-a2fb-7ac19e3b25c0","Type":"ContainerStarted","Data":"74f7d28a0e08be422a754d690835c9c115cca8bbbefc3a7dbf062e3a55be3cc7"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.898682 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-sdmkd" event={"ID":"cec0ee98-d570-417f-a2fb-7ac19e3b25c0","Type":"ContainerStarted","Data":"c1c0be0897c74e1e5bfac7e6f04a7470b6e71b54d2fad486fff495474bb80321"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.898693 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sdmkd" event={"ID":"cec0ee98-d570-417f-a2fb-7ac19e3b25c0","Type":"ContainerStarted","Data":"b1afca7b810b63c91ec3da8b1b91ffda7f897669d0a1939e818f008318abe3cb"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.900460 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e217a7c14d9c317dc8379cac46710f87e935131cc71b01134b24c83dfa7adf0f"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.902300 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"159dd998b9dd4a12a42f6798a58a9e84c28bee47d9ddf7c7ac1aae5f3aaad36e"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.905418 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" podStartSLOduration=11.90540608 podStartE2EDuration="11.90540608s" podCreationTimestamp="2026-01-30 10:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:06.904599123 +0000 UTC m=+151.470902947" watchObservedRunningTime="2026-01-30 10:14:06.90540608 +0000 UTC m=+151.471709904" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.907555 4984 generic.go:334] "Generic (PLEG): container finished" podID="46e81fe4-3beb-448b-955e-c6db37c85e77" 
containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943" exitCode=0 Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.907662 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.907694 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerStarted","Data":"a250f9ff70a0c171ec7466406231de39859f7a4b3ffd9f714f62126a8f50f17b"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.912877 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"746156d6ab9e2462688e32420a534fbded046e889a3daef751744861b3e18a38"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.912912 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7038f5db64c7b4198b177efa9586a34734ab4d5fb4cd45f1556d88899fdaeb91"} Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.922632 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:14:06 crc kubenswrapper[4984]: I0130 10:14:06.945323 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sdmkd" podStartSLOduration=130.945302654 podStartE2EDuration="2m10.945302654s" podCreationTimestamp="2026-01-30 10:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:06.93870789 +0000 UTC m=+151.505011714" watchObservedRunningTime="2026-01-30 10:14:06.945302654 +0000 UTC m=+151.511606478" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.054196 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:14:07 crc kubenswrapper[4984]: W0130 10:14:07.101904 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94ba287c_b444_471f_8be9_e1c553ee251e.slice/crio-1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01 WatchSource:0}: Error finding container 1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01: Status 404 returned error can't find the container with id 1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01 Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.208009 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.213003 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.215613 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.218654 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.218813 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.270104 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.355224 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.355630 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.368535 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:07 crc 
kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:07 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:07 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.368588 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.457114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.457203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.459181 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.475038 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"3a839741-51fc-4340-8210-3c29bae228c0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.500991 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:14:07 crc kubenswrapper[4984]: W0130 10:14:07.511778 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccdbdcd6_0816_4dc6_bdb2_e3088376d3fe.slice/crio-885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455 WatchSource:0}: Error finding container 885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455: Status 404 returned error can't find the container with id 885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455 Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.541885 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.944561 4984 generic.go:334] "Generic (PLEG): container finished" podID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6" exitCode=0 Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.944744 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.944906 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerStarted","Data":"885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.979044 4984 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerStarted","Data":"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.979088 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerStarted","Data":"79705b85e33c0776d034e28c0f0671763dc639d3eee8637beef1fb06cd051685"} Jan 30 10:14:07 crc kubenswrapper[4984]: I0130 10:14:07.979108 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.004518 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" podStartSLOduration=131.004502158 podStartE2EDuration="2m11.004502158s" podCreationTimestamp="2026-01-30 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:08.002074536 +0000 UTC m=+152.568378360" watchObservedRunningTime="2026-01-30 10:14:08.004502158 +0000 UTC m=+152.570805982" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.017490 4984 generic.go:334] "Generic (PLEG): container finished" podID="94ba287c-b444-471f-8be9-e1c553ee251e" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" exitCode=0 Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.018552 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193"} Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 
10:14:08.018580 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerStarted","Data":"1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01"} Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.049926 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.049983 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.050226 4984 patch_prober.go:28] interesting pod/downloads-7954f5f757-jc8ph container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.050269 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jc8ph" podUID="53f7d13c-e0e5-47cd-b819-8ad8e6e1e761" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.073690 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.123672 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.124501 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.124523 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.130421 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.342492 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.342837 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.348071 4984 patch_prober.go:28] interesting pod/console-f9d7485db-v2prt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.348128 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v2prt" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.365692 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 
10:14:08.376077 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:08 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:08 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:08 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.376706 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.416915 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.456208 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.579548 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") pod \"fbdde9dd-69cf-405d-9143-1739e3acbdde\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.579817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") pod \"fbdde9dd-69cf-405d-9143-1739e3acbdde\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.579856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") pod \"fbdde9dd-69cf-405d-9143-1739e3acbdde\" (UID: \"fbdde9dd-69cf-405d-9143-1739e3acbdde\") " Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.581169 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume" (OuterVolumeSpecName: "config-volume") pod "fbdde9dd-69cf-405d-9143-1739e3acbdde" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.588012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fbdde9dd-69cf-405d-9143-1739e3acbdde" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.588402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69" (OuterVolumeSpecName: "kube-api-access-dnv69") pod "fbdde9dd-69cf-405d-9143-1739e3acbdde" (UID: "fbdde9dd-69cf-405d-9143-1739e3acbdde"). InnerVolumeSpecName "kube-api-access-dnv69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.681883 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fbdde9dd-69cf-405d-9143-1739e3acbdde-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.681932 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fbdde9dd-69cf-405d-9143-1739e3acbdde-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:08 crc kubenswrapper[4984]: I0130 10:14:08.681942 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnv69\" (UniqueName: \"kubernetes.io/projected/fbdde9dd-69cf-405d-9143-1739e3acbdde-kube-api-access-dnv69\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.041736 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerStarted","Data":"168ae950583372bca0265bccae669727488cb747b78f18d7f77c7b4c7e81d236"} Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.041778 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerStarted","Data":"f69e16f549627ffe3c74b15b970c9a739dca0bbacd25c33919f5b9e3ae398142"} Jan 30 
10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.054242 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" event={"ID":"fbdde9dd-69cf-405d-9143-1739e3acbdde","Type":"ContainerDied","Data":"ec2d22de67b56a877f06438f63a967e0f4c4b09fd390d26e87379805202f3828"} Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.054317 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2d22de67b56a877f06438f63a967e0f4c4b09fd390d26e87379805202f3828" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.054392 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.065174 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.065152402 podStartE2EDuration="2.065152402s" podCreationTimestamp="2026-01-30 10:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:09.056967295 +0000 UTC m=+153.623271119" watchObservedRunningTime="2026-01-30 10:14:09.065152402 +0000 UTC m=+153.631456226" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.077356 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6hfpr" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.368033 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:09 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:09 crc kubenswrapper[4984]: 
[+]process-running ok Jan 30 10:14:09 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.368526 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.547524 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:09 crc kubenswrapper[4984]: I0130 10:14:09.564009 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fzff9" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.071122 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a839741-51fc-4340-8210-3c29bae228c0" containerID="168ae950583372bca0265bccae669727488cb747b78f18d7f77c7b4c7e81d236" exitCode=0 Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.071469 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerDied","Data":"168ae950583372bca0265bccae669727488cb747b78f18d7f77c7b4c7e81d236"} Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.366048 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:10 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:10 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:10 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.366121 4984 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.738901 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 10:14:10 crc kubenswrapper[4984]: E0130 10:14:10.739163 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerName="collect-profiles" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.739177 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerName="collect-profiles" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.739308 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" containerName="collect-profiles" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.739693 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.740717 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.755039 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.755314 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.836224 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.836370 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.937470 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.937526 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.937646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:10 crc kubenswrapper[4984]: I0130 10:14:10.954604 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.083579 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.368053 4984 patch_prober.go:28] interesting pod/router-default-5444994796-b5gpb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 10:14:11 crc kubenswrapper[4984]: [-]has-synced failed: reason withheld Jan 30 10:14:11 crc kubenswrapper[4984]: [+]process-running ok Jan 30 10:14:11 crc kubenswrapper[4984]: healthz check failed Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.368437 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-b5gpb" podUID="04dd150e-af11-495b-a44b-10cce42da55b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 10:14:11 crc kubenswrapper[4984]: I0130 10:14:11.609563 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 10:14:12 crc kubenswrapper[4984]: I0130 10:14:12.367837 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:12 crc kubenswrapper[4984]: I0130 10:14:12.375718 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-b5gpb" Jan 30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.127097 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-t8vjw_a2849d59-5121-45c3-bf3c-41c83a87827c/cluster-samples-operator/0.log" Jan 30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.127150 4984 generic.go:334] "Generic (PLEG): container finished" podID="a2849d59-5121-45c3-bf3c-41c83a87827c" containerID="a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e" exitCode=2 Jan 
30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.127318 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerDied","Data":"a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e"} Jan 30 10:14:13 crc kubenswrapper[4984]: I0130 10:14:13.128780 4984 scope.go:117] "RemoveContainer" containerID="a346a9827a841476aaa45784e89737d7660e442268d7e7f018b3d6a2ac74ea0e" Jan 30 10:14:14 crc kubenswrapper[4984]: I0130 10:14:14.099571 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-tnwfs" Jan 30 10:14:18 crc kubenswrapper[4984]: I0130 10:14:18.052705 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jc8ph" Jan 30 10:14:18 crc kubenswrapper[4984]: I0130 10:14:18.348863 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:18 crc kubenswrapper[4984]: I0130 10:14:18.352117 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:14:19 crc kubenswrapper[4984]: W0130 10:14:19.200389 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcb28ac48_0559_44f0_b620_ad0eae3e3efb.slice/crio-dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81 WatchSource:0}: Error finding container dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81: Status 404 returned error can't find the container with id dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81 Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.283156 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357449 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") pod \"3a839741-51fc-4340-8210-3c29bae228c0\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357502 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") pod \"3a839741-51fc-4340-8210-3c29bae228c0\" (UID: \"3a839741-51fc-4340-8210-3c29bae228c0\") " Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357709 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3a839741-51fc-4340-8210-3c29bae228c0" (UID: "3a839741-51fc-4340-8210-3c29bae228c0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.357910 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a839741-51fc-4340-8210-3c29bae228c0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.364828 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3a839741-51fc-4340-8210-3c29bae228c0" (UID: "3a839741-51fc-4340-8210-3c29bae228c0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:14:19 crc kubenswrapper[4984]: I0130 10:14:19.459029 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a839741-51fc-4340-8210-3c29bae228c0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.181987 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-t8vjw_a2849d59-5121-45c3-bf3c-41c83a87827c/cluster-samples-operator/0.log" Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.182313 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8vjw" event={"ID":"a2849d59-5121-45c3-bf3c-41c83a87827c","Type":"ContainerStarted","Data":"af122d465277324b26aa370cf9c30cad1ba1d9748b41f88e445b70b061317ab8"} Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.184173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerStarted","Data":"dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81"} Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.185744 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3a839741-51fc-4340-8210-3c29bae228c0","Type":"ContainerDied","Data":"f69e16f549627ffe3c74b15b970c9a739dca0bbacd25c33919f5b9e3ae398142"} Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.185771 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69e16f549627ffe3c74b15b970c9a739dca0bbacd25c33919f5b9e3ae398142" Jan 30 10:14:20 crc kubenswrapper[4984]: I0130 10:14:20.185839 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 10:14:26 crc kubenswrapper[4984]: I0130 10:14:26.715593 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:14:33 crc kubenswrapper[4984]: I0130 10:14:33.000545 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:14:33 crc kubenswrapper[4984]: I0130 10:14:33.000862 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:14:35 crc kubenswrapper[4984]: I0130 10:14:35.205876 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 10:14:36 crc kubenswrapper[4984]: E0130 10:14:36.813908 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 10:14:36 crc kubenswrapper[4984]: E0130 10:14:36.814483 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7xqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8cnkg_openshift-marketplace(4aab6e83-8a77-45ad-aa28-fe2c519133fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:36 crc kubenswrapper[4984]: E0130 10:14:36.815838 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8cnkg" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" Jan 30 10:14:38 crc 
kubenswrapper[4984]: I0130 10:14:38.796407 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7n8b9" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.409609 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8cnkg" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.484202 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.484600 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b9h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dk77x_openshift-marketplace(874a87b2-c81a-4ce9-85c6-c41d18835f35): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.486109 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dk77x" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" Jan 30 10:14:41 crc 
kubenswrapper[4984]: E0130 10:14:41.501350 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.501504 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hs7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-9vv7r_openshift-marketplace(44e02fc4-8da4-4122-bd3a-9b8f9734ec59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:41 crc kubenswrapper[4984]: E0130 10:14:41.502715 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9vv7r" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" Jan 30 10:14:42 crc kubenswrapper[4984]: E0130 10:14:42.995002 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9vv7r" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" Jan 30 10:14:42 crc kubenswrapper[4984]: E0130 10:14:42.995011 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dk77x" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.069491 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.069633 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgrgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w4cgz_openshift-marketplace(b628557d-490d-4803-8ae3-fde88678c6a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.071681 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w4cgz" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.076642 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.076978 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fd8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-njw8t_openshift-marketplace(33689f3c-1867-4707-a8c2-ed56c467cff6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:43 crc kubenswrapper[4984]: E0130 10:14:43.078792 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-njw8t" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.850167 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w4cgz" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.850286 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-njw8t" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.975781 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.975959 4984 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8p7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vzmvg_openshift-marketplace(ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.977459 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vzmvg" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.980073 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.980189 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxf87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorPr
ofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dc27n_openshift-marketplace(94ba287c-b444-471f-8be9-e1c553ee251e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 10:14:45 crc kubenswrapper[4984]: E0130 10:14:45.982164 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dc27n" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.735988 4984 generic.go:334] "Generic (PLEG): container finished" podID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5" exitCode=0 Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.736116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"} Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.741968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerStarted","Data":"00adcb88ac65cc04f03ee235cbeb2898a181dfd1a92ff87c6400fa8e0bbbb37e"} Jan 30 10:14:46 crc kubenswrapper[4984]: E0130 10:14:46.744492 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vzmvg" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"
Jan 30 10:14:46 crc kubenswrapper[4984]: E0130 10:14:46.744491 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dc27n" podUID="94ba287c-b444-471f-8be9-e1c553ee251e"
Jan 30 10:14:46 crc kubenswrapper[4984]: I0130 10:14:46.768560 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=36.768540866 podStartE2EDuration="36.768540866s" podCreationTimestamp="2026-01-30 10:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:46.766028341 +0000 UTC m=+191.332332165" watchObservedRunningTime="2026-01-30 10:14:46.768540866 +0000 UTC m=+191.334844690"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.084882 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 10:14:47 crc kubenswrapper[4984]: E0130 10:14:47.085732 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a839741-51fc-4340-8210-3c29bae228c0" containerName="pruner"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.085759 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a839741-51fc-4340-8210-3c29bae228c0" containerName="pruner"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.085991 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a839741-51fc-4340-8210-3c29bae228c0" containerName="pruner"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.086671 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.094355 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.127216 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.128429 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.230341 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.230433 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.230534 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.259919 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.423155 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.753823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerStarted","Data":"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"}
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.762719 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerDied","Data":"00adcb88ac65cc04f03ee235cbeb2898a181dfd1a92ff87c6400fa8e0bbbb37e"}
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.762541 4984 generic.go:334] "Generic (PLEG): container finished" podID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerID="00adcb88ac65cc04f03ee235cbeb2898a181dfd1a92ff87c6400fa8e0bbbb37e" exitCode=0
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.784598 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9zx8" podStartSLOduration=2.508570189 podStartE2EDuration="42.784581337s" podCreationTimestamp="2026-01-30 10:14:05 +0000 UTC" firstStartedPulling="2026-01-30 10:14:06.909718556 +0000 UTC m=+151.476022370" lastFinishedPulling="2026-01-30 10:14:47.185729694 +0000 UTC m=+191.752033518" observedRunningTime="2026-01-30 10:14:47.780589201 +0000 UTC m=+192.346893035" watchObservedRunningTime="2026-01-30 10:14:47.784581337 +0000 UTC m=+192.350885161"
Jan 30 10:14:47 crc kubenswrapper[4984]: I0130 10:14:47.837113 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 10:14:48 crc kubenswrapper[4984]: I0130 10:14:48.770526 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerStarted","Data":"640fce3d236a1edd0ddde718a8e4c32e8e166331100d7cc7ae5d57097d7bd06b"}
Jan 30 10:14:48 crc kubenswrapper[4984]: I0130 10:14:48.770584 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerStarted","Data":"3bcbd0feb40dcbec6dc99fdf79e3898936bbf9ed6f46113ea34ca9ab4e951e60"}
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.125191 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.142558 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.142543541 podStartE2EDuration="2.142543541s" podCreationTimestamp="2026-01-30 10:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:48.793681102 +0000 UTC m=+193.359984936" watchObservedRunningTime="2026-01-30 10:14:49.142543541 +0000 UTC m=+193.708847365"
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.254677 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") pod \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") "
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.255028 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") pod \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\" (UID: \"cb28ac48-0559-44f0-b620-ad0eae3e3efb\") "
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.254819 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb28ac48-0559-44f0-b620-ad0eae3e3efb" (UID: "cb28ac48-0559-44f0-b620-ad0eae3e3efb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.255404 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.260707 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb28ac48-0559-44f0-b620-ad0eae3e3efb" (UID: "cb28ac48-0559-44f0-b620-ad0eae3e3efb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.356413 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb28ac48-0559-44f0-b620-ad0eae3e3efb-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.775601 4984 generic.go:334] "Generic (PLEG): container finished" podID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerID="640fce3d236a1edd0ddde718a8e4c32e8e166331100d7cc7ae5d57097d7bd06b" exitCode=0
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.775639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerDied","Data":"640fce3d236a1edd0ddde718a8e4c32e8e166331100d7cc7ae5d57097d7bd06b"}
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.778084 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cb28ac48-0559-44f0-b620-ad0eae3e3efb","Type":"ContainerDied","Data":"dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81"}
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.778110 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4d1fb31be3ca7dc70b84261a999d5fc62445c44e9015af22b3dd50a37eed81"
Jan 30 10:14:49 crc kubenswrapper[4984]: I0130 10:14:49.778165 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.034625 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.177575 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") pod \"086e0d49-45d0-4b49-9f2c-19a1863521d0\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") "
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.178729 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") pod \"086e0d49-45d0-4b49-9f2c-19a1863521d0\" (UID: \"086e0d49-45d0-4b49-9f2c-19a1863521d0\") "
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.179097 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "086e0d49-45d0-4b49-9f2c-19a1863521d0" (UID: "086e0d49-45d0-4b49-9f2c-19a1863521d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.186609 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "086e0d49-45d0-4b49-9f2c-19a1863521d0" (UID: "086e0d49-45d0-4b49-9f2c-19a1863521d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.280119 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/086e0d49-45d0-4b49-9f2c-19a1863521d0-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.280157 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/086e0d49-45d0-4b49-9f2c-19a1863521d0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.789036 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"086e0d49-45d0-4b49-9f2c-19a1863521d0","Type":"ContainerDied","Data":"3bcbd0feb40dcbec6dc99fdf79e3898936bbf9ed6f46113ea34ca9ab4e951e60"}
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.789082 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcbd0feb40dcbec6dc99fdf79e3898936bbf9ed6f46113ea34ca9ab4e951e60"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.789082 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.870604 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 10:14:51 crc kubenswrapper[4984]: E0130 10:14:51.870892 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerName="pruner"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.870924 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerName="pruner"
Jan 30 10:14:51 crc kubenswrapper[4984]: E0130 10:14:51.870938 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerName="pruner"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.870944 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerName="pruner"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.871101 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="086e0d49-45d0-4b49-9f2c-19a1863521d0" containerName="pruner"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.871120 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb28ac48-0559-44f0-b620-ad0eae3e3efb" containerName="pruner"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.871616 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.873557 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.873795 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.880209 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.888046 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.888103 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.888212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.988849 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.988977 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.989002 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.989050 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:51 crc kubenswrapper[4984]: I0130 10:14:51.989051 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.007952 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.206225 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.657772 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 30 10:14:52 crc kubenswrapper[4984]: W0130 10:14:52.658629 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6f61aac1_18eb_4615_958d_b52a11645afb.slice/crio-2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853 WatchSource:0}: Error finding container 2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853: Status 404 returned error can't find the container with id 2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853
Jan 30 10:14:52 crc kubenswrapper[4984]: I0130 10:14:52.796548 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerStarted","Data":"2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853"}
Jan 30 10:14:53 crc kubenswrapper[4984]: I0130 10:14:53.803512 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerStarted","Data":"1ece5995ec1cb186ea0589ac48611a00d40c00849c2709d52ee48a8bf55e2079"}
Jan 30 10:14:53 crc kubenswrapper[4984]: I0130 10:14:53.819607 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.8195911970000003 podStartE2EDuration="2.819591197s" podCreationTimestamp="2026-01-30 10:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:14:53.816587565 +0000 UTC m=+198.382891399" watchObservedRunningTime="2026-01-30 10:14:53.819591197 +0000 UTC m=+198.385895011"
Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.577952 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.579088 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.741025 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.851005 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:55 crc kubenswrapper[4984]: I0130 10:14:55.967767 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"]
Jan 30 10:14:56 crc kubenswrapper[4984]: I0130 10:14:56.474911 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"]
Jan 30 10:14:57 crc kubenswrapper[4984]: I0130 10:14:57.827558 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9zx8" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server" containerID="cri-o://f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" gracePeriod=2
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.449663 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.596667 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") pod \"46e81fe4-3beb-448b-955e-c6db37c85e77\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") "
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.596762 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") pod \"46e81fe4-3beb-448b-955e-c6db37c85e77\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") "
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.596788 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") pod \"46e81fe4-3beb-448b-955e-c6db37c85e77\" (UID: \"46e81fe4-3beb-448b-955e-c6db37c85e77\") "
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.598743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities" (OuterVolumeSpecName: "utilities") pod "46e81fe4-3beb-448b-955e-c6db37c85e77" (UID: "46e81fe4-3beb-448b-955e-c6db37c85e77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.602005 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8" (OuterVolumeSpecName: "kube-api-access-vh4b8") pod "46e81fe4-3beb-448b-955e-c6db37c85e77" (UID: "46e81fe4-3beb-448b-955e-c6db37c85e77"). InnerVolumeSpecName "kube-api-access-vh4b8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.629007 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46e81fe4-3beb-448b-955e-c6db37c85e77" (UID: "46e81fe4-3beb-448b-955e-c6db37c85e77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.698415 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh4b8\" (UniqueName: \"kubernetes.io/projected/46e81fe4-3beb-448b-955e-c6db37c85e77-kube-api-access-vh4b8\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.698452 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.698462 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e81fe4-3beb-448b-955e-c6db37c85e77-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.832560 4984 generic.go:334] "Generic (PLEG): container finished" podID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" exitCode=0
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.832640 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8"}
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.835147 4984 generic.go:334] "Generic (PLEG): container finished" podID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" exitCode=0
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.835208 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051"}
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.836951 4984 generic.go:334] "Generic (PLEG): container finished" podID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b" exitCode=0
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.836992 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"}
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.837054 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9zx8" event={"ID":"46e81fe4-3beb-448b-955e-c6db37c85e77","Type":"ContainerDied","Data":"a250f9ff70a0c171ec7466406231de39859f7a4b3ffd9f714f62126a8f50f17b"}
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.837014 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9zx8"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.837079 4984 scope.go:117] "RemoveContainer" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.838775 4984 generic.go:334] "Generic (PLEG): container finished" podID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" exitCode=0
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.838823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74"}
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.919326 4984 scope.go:117] "RemoveContainer" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.920813 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"]
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.923435 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9zx8"]
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.946762 4984 scope.go:117] "RemoveContainer" containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961007 4984 scope.go:117] "RemoveContainer" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"
Jan 30 10:14:58 crc kubenswrapper[4984]: E0130 10:14:58.961429 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b\": container with ID starting with f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b not found: ID does not exist" containerID="f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961473 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b"} err="failed to get container status \"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b\": rpc error: code = NotFound desc = could not find container \"f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b\": container with ID starting with f775a547bdaa9aba35f78f978a6c80d8435f1a93a2b42d45950d9e1261963c6b not found: ID does not exist"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961605 4984 scope.go:117] "RemoveContainer" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"
Jan 30 10:14:58 crc kubenswrapper[4984]: E0130 10:14:58.961893 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5\": container with ID starting with fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5 not found: ID does not exist" containerID="fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961918 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5"} err="failed to get container status \"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5\": rpc error: code = NotFound desc = could not find container \"fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5\": container with ID starting with fe9f18186c24d244f552cba225313a16e8b525da295ed33c259b326817479ab5 not found: ID does not exist"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.961933 4984 scope.go:117] "RemoveContainer" containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"
Jan 30 10:14:58 crc kubenswrapper[4984]: E0130 10:14:58.962614 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943\": container with ID starting with f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943 not found: ID does not exist" containerID="f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"
Jan 30 10:14:58 crc kubenswrapper[4984]: I0130 10:14:58.962646 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943"} err="failed to get container status \"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943\": rpc error: code = NotFound desc = could not find container \"f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943\": container with ID starting with f4120e272db360a0a93a3ac4841ad78c0b718f118ea386ff43dfb6de400a7943 not found: ID does not exist"
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.846493 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerStarted","Data":"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.847883 4984 generic.go:334] "Generic (PLEG): container finished" podID="b628557d-490d-4803-8ae3-fde88678c6a4" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" exitCode=0
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.847990 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.849682 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerStarted","Data":"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.853200 4984 generic.go:334] "Generic (PLEG): container finished" podID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab" exitCode=0
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.853325 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.857768 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerStarted","Data":"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb"}
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.868638 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vv7r" podStartSLOduration=2.347205198 podStartE2EDuration="55.868615083s" podCreationTimestamp="2026-01-30 10:14:04 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.81071372 +0000 UTC m=+150.377017544" lastFinishedPulling="2026-01-30 10:14:59.332123605 +0000 UTC m=+203.898427429" observedRunningTime="2026-01-30 10:14:59.867169184 +0000 UTC m=+204.433473008" watchObservedRunningTime="2026-01-30 10:14:59.868615083 +0000 UTC m=+204.434918917"
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.903391 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dk77x" podStartSLOduration=3.366592599 podStartE2EDuration="56.903374313s" podCreationTimestamp="2026-01-30 10:14:03 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.84400522 +0000 UTC m=+150.410309044" lastFinishedPulling="2026-01-30 10:14:59.380786934 +0000 UTC m=+203.947090758" observedRunningTime="2026-01-30 10:14:59.885745081 +0000 UTC m=+204.452048905" watchObservedRunningTime="2026-01-30 10:14:59.903374313 +0000 UTC m=+204.469678137"
Jan 30 10:14:59 crc kubenswrapper[4984]: I0130 10:14:59.940642 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8cnkg" podStartSLOduration=3.3656946899999998 podStartE2EDuration="57.940628151s" podCreationTimestamp="2026-01-30 10:14:02 +0000 UTC" firstStartedPulling="2026-01-30 10:14:04.726273559 +0000 UTC m=+149.292577383" lastFinishedPulling="2026-01-30 10:14:59.30120702 +0000 UTC m=+203.867510844" observedRunningTime="2026-01-30 10:14:59.939288634 +0000 UTC m=+204.505592458" watchObservedRunningTime="2026-01-30 10:14:59.940628151 +0000 UTC m=+204.506931975"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.097026 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" path="/var/lib/kubelet/pods/46e81fe4-3beb-448b-955e-c6db37c85e77/volumes"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137046 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"]
Jan 30 10:15:00 crc kubenswrapper[4984]: E0130 10:15:00.137424 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-utilities"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137446 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-utilities"
Jan 30 10:15:00 crc kubenswrapper[4984]: E0130 10:15:00.137464 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-content"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137473 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="extract-content"
Jan 30 10:15:00 crc kubenswrapper[4984]: E0130 10:15:00.137490 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137498 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.137710 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e81fe4-3beb-448b-955e-c6db37c85e77" containerName="registry-server"
Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.138205 4984 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.140054 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.140054 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.147548 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"] Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.317145 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.317194 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.317238 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.418188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.418261 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.418287 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.419234 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.425953 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.438390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"collect-profiles-29496135-d89wg\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.452664 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.689923 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"] Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.865081 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerStarted","Data":"626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b"} Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.865130 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerStarted","Data":"b89ef05138b4f7b740fca5d924b385306f0c0d92ee704c96db15d26be98e3344"} Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.867681 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" 
event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerStarted","Data":"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"} Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.869986 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerStarted","Data":"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc"} Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.883550 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" podStartSLOduration=0.883531894 podStartE2EDuration="883.531894ms" podCreationTimestamp="2026-01-30 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:15:00.882892876 +0000 UTC m=+205.449196710" watchObservedRunningTime="2026-01-30 10:15:00.883531894 +0000 UTC m=+205.449835718" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.902140 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vzmvg" podStartSLOduration=2.5689887909999998 podStartE2EDuration="54.902119142s" podCreationTimestamp="2026-01-30 10:14:06 +0000 UTC" firstStartedPulling="2026-01-30 10:14:07.952951739 +0000 UTC m=+152.519255563" lastFinishedPulling="2026-01-30 10:15:00.28608208 +0000 UTC m=+204.852385914" observedRunningTime="2026-01-30 10:15:00.899095769 +0000 UTC m=+205.465399593" watchObservedRunningTime="2026-01-30 10:15:00.902119142 +0000 UTC m=+205.468422976" Jan 30 10:15:00 crc kubenswrapper[4984]: I0130 10:15:00.918996 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w4cgz" podStartSLOduration=4.533354052 podStartE2EDuration="58.918979572s" podCreationTimestamp="2026-01-30 
10:14:02 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.834549379 +0000 UTC m=+150.400853203" lastFinishedPulling="2026-01-30 10:15:00.220174899 +0000 UTC m=+204.786478723" observedRunningTime="2026-01-30 10:15:00.916104734 +0000 UTC m=+205.482408568" watchObservedRunningTime="2026-01-30 10:15:00.918979572 +0000 UTC m=+205.485283406" Jan 30 10:15:01 crc kubenswrapper[4984]: I0130 10:15:01.883824 4984 generic.go:334] "Generic (PLEG): container finished" podID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerID="626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b" exitCode=0 Jan 30 10:15:01 crc kubenswrapper[4984]: I0130 10:15:01.883927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerDied","Data":"626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b"} Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.892204 4984 generic.go:334] "Generic (PLEG): container finished" podID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" exitCode=0 Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.892232 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9"} Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.895834 4984 generic.go:334] "Generic (PLEG): container finished" podID="94ba287c-b444-471f-8be9-e1c553ee251e" containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" exitCode=0 Jan 30 10:15:02 crc kubenswrapper[4984]: I0130 10:15:02.895919 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" 
event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5"} Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.000784 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.000851 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.000903 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.001484 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.001546 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e" gracePeriod=600 Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.156848 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.156905 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.179865 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.195977 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.354553 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") pod \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.355169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") pod \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.355355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") pod \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\" (UID: \"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9\") " Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.356086 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" (UID: "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.359186 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" (UID: "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.359386 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4" (OuterVolumeSpecName: "kube-api-access-pq7r4") pod "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" (UID: "c5144eb3-3db1-4164-9dc1-51afa4ca6ac9"). InnerVolumeSpecName "kube-api-access-pq7r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.456757 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.456802 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.456818 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7r4\" (UniqueName: \"kubernetes.io/projected/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9-kube-api-access-pq7r4\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.548265 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.548312 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.625004 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.681186 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.681984 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.734767 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.903022 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e" exitCode=0 Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.903102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e"} Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.903142 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"} Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.905114 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" event={"ID":"c5144eb3-3db1-4164-9dc1-51afa4ca6ac9","Type":"ContainerDied","Data":"b89ef05138b4f7b740fca5d924b385306f0c0d92ee704c96db15d26be98e3344"} Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.905167 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89ef05138b4f7b740fca5d924b385306f0c0d92ee704c96db15d26be98e3344" Jan 30 10:15:03 crc kubenswrapper[4984]: I0130 10:15:03.905718 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg" Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.260576 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.262392 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.315513 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:15:05 crc kubenswrapper[4984]: I0130 10:15:05.954065 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.921451 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerStarted","Data":"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc"} Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.922804 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.922841 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.923017 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerStarted","Data":"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d"} Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.940625 4984 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-njw8t" podStartSLOduration=3.952958945 podStartE2EDuration="1m3.94060594s" podCreationTimestamp="2026-01-30 10:14:03 +0000 UTC" firstStartedPulling="2026-01-30 10:14:05.813036929 +0000 UTC m=+150.379340753" lastFinishedPulling="2026-01-30 10:15:05.800683924 +0000 UTC m=+210.366987748" observedRunningTime="2026-01-30 10:15:06.93839967 +0000 UTC m=+211.504703484" watchObservedRunningTime="2026-01-30 10:15:06.94060594 +0000 UTC m=+211.506909764" Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.962079 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dc27n" podStartSLOduration=3.167813278 podStartE2EDuration="1m0.962063136s" podCreationTimestamp="2026-01-30 10:14:06 +0000 UTC" firstStartedPulling="2026-01-30 10:14:08.026425932 +0000 UTC m=+152.592729756" lastFinishedPulling="2026-01-30 10:15:05.82067577 +0000 UTC m=+210.386979614" observedRunningTime="2026-01-30 10:15:06.959545047 +0000 UTC m=+211.525848871" watchObservedRunningTime="2026-01-30 10:15:06.962063136 +0000 UTC m=+211.528366960" Jan 30 10:15:06 crc kubenswrapper[4984]: I0130 10:15:06.966708 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:15:07 crc kubenswrapper[4984]: I0130 10:15:07.968440 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:15:10 crc kubenswrapper[4984]: I0130 10:15:10.369480 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:15:10 crc kubenswrapper[4984]: I0130 10:15:10.370133 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vzmvg" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" 
containerID="cri-o://5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" gracePeriod=2 Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.756489 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.870385 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") pod \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.870522 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") pod \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.870561 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") pod \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\" (UID: \"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe\") " Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.871587 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities" (OuterVolumeSpecName: "utilities") pod "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" (UID: "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.882507 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t" (OuterVolumeSpecName: "kube-api-access-z8p7t") pod "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" (UID: "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"). InnerVolumeSpecName "kube-api-access-z8p7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953779 4984 generic.go:334] "Generic (PLEG): container finished" podID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" exitCode=0 Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953849 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"} Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953896 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzmvg" event={"ID":"ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe","Type":"ContainerDied","Data":"885308cdee1788f90d4d4127dbee72d7e0f92f6a85267488120f7976aafba455"} Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.953927 4984 scope.go:117] "RemoveContainer" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.954304 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vzmvg" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.972732 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.972776 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8p7t\" (UniqueName: \"kubernetes.io/projected/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-kube-api-access-z8p7t\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.976217 4984 scope.go:117] "RemoveContainer" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab" Jan 30 10:15:11 crc kubenswrapper[4984]: I0130 10:15:11.995105 4984 scope.go:117] "RemoveContainer" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6" Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.018107 4984 scope.go:117] "RemoveContainer" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" Jan 30 10:15:12 crc kubenswrapper[4984]: E0130 10:15:12.018663 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e\": container with ID starting with 5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e not found: ID does not exist" containerID="5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e" Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.018716 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e"} err="failed to get container status \"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e\": rpc error: code = NotFound desc = could not 
find container \"5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e\": container with ID starting with 5d1f576cc8aff8bd6ddc747018f188ed63b0f6bf530c68839424e75b4968778e not found: ID does not exist" Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.018743 4984 scope.go:117] "RemoveContainer" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab" Jan 30 10:15:12 crc kubenswrapper[4984]: E0130 10:15:12.019092 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab\": container with ID starting with 1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab not found: ID does not exist" containerID="1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab" Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.019124 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab"} err="failed to get container status \"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab\": rpc error: code = NotFound desc = could not find container \"1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab\": container with ID starting with 1f0fc9e2bf6453251b84f2aaefb4f25c4972a6f218cabcd2b6204a21ed4361ab not found: ID does not exist" Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.019145 4984 scope.go:117] "RemoveContainer" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6" Jan 30 10:15:12 crc kubenswrapper[4984]: E0130 10:15:12.019451 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6\": container with ID starting with 3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6 not found: ID 
does not exist" containerID="3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6" Jan 30 10:15:12 crc kubenswrapper[4984]: I0130 10:15:12.019487 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6"} err="failed to get container status \"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6\": rpc error: code = NotFound desc = could not find container \"3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6\": container with ID starting with 3a600376ff0848def6f3715a9d7b42e43f93d3a95fb141657932d28f84f796d6 not found: ID does not exist" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.035599 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" (UID: "ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.085986 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.197951 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.201180 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vzmvg"] Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.211034 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.599286 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.731000 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.737885 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.737929 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:13 crc kubenswrapper[4984]: I0130 10:15:13.799655 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:14 crc kubenswrapper[4984]: I0130 10:15:14.032442 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njw8t" 
Jan 30 10:15:14 crc kubenswrapper[4984]: I0130 10:15:14.100396 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" path="/var/lib/kubelet/pods/ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe/volumes" Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.572433 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.573023 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dk77x" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" containerID="cri-o://1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" gracePeriod=2 Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.968104 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990054 4984 generic.go:334] "Generic (PLEG): container finished" podID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" exitCode=0 Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b"} Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990159 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dk77x" event={"ID":"874a87b2-c81a-4ce9-85c6-c41d18835f35","Type":"ContainerDied","Data":"d4e11ff82c245260cc3e822769044f1912b9d1350f3d2065bba100f06e3b43f2"} Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990183 4984 scope.go:117] "RemoveContainer" 
containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" Jan 30 10:15:15 crc kubenswrapper[4984]: I0130 10:15:15.990410 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dk77x" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.008317 4984 scope.go:117] "RemoveContainer" containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.024544 4984 scope.go:117] "RemoveContainer" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.045971 4984 scope.go:117] "RemoveContainer" containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" Jan 30 10:15:16 crc kubenswrapper[4984]: E0130 10:15:16.046487 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b\": container with ID starting with 1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b not found: ID does not exist" containerID="1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.046637 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b"} err="failed to get container status \"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b\": rpc error: code = NotFound desc = could not find container \"1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b\": container with ID starting with 1d808e6853c6b65124d0d55e3bed2fb56209dbf0d21ef973d5f6e846d494e71b not found: ID does not exist" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.046685 4984 scope.go:117] "RemoveContainer" 
containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" Jan 30 10:15:16 crc kubenswrapper[4984]: E0130 10:15:16.046993 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051\": container with ID starting with 0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051 not found: ID does not exist" containerID="0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.047024 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051"} err="failed to get container status \"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051\": rpc error: code = NotFound desc = could not find container \"0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051\": container with ID starting with 0dc3428eb0baffa520ffce1d0e7f02cd63e44b43332540f123b1d21c7056a051 not found: ID does not exist" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.047038 4984 scope.go:117] "RemoveContainer" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" Jan 30 10:15:16 crc kubenswrapper[4984]: E0130 10:15:16.047327 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7\": container with ID starting with de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7 not found: ID does not exist" containerID="de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.047371 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7"} err="failed to get container status \"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7\": rpc error: code = NotFound desc = could not find container \"de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7\": container with ID starting with de1d483c8f22a358a90b8dc472cff5e8e4d2eecee8223e7986352026552fa5c7 not found: ID does not exist" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.127517 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") pod \"874a87b2-c81a-4ce9-85c6-c41d18835f35\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.127566 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") pod \"874a87b2-c81a-4ce9-85c6-c41d18835f35\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.127743 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") pod \"874a87b2-c81a-4ce9-85c6-c41d18835f35\" (UID: \"874a87b2-c81a-4ce9-85c6-c41d18835f35\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.128963 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities" (OuterVolumeSpecName: "utilities") pod "874a87b2-c81a-4ce9-85c6-c41d18835f35" (UID: "874a87b2-c81a-4ce9-85c6-c41d18835f35"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.134204 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4" (OuterVolumeSpecName: "kube-api-access-6b9h4") pod "874a87b2-c81a-4ce9-85c6-c41d18835f35" (UID: "874a87b2-c81a-4ce9-85c6-c41d18835f35"). InnerVolumeSpecName "kube-api-access-6b9h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.173158 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.173491 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njw8t" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" containerID="cri-o://d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" gracePeriod=2 Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.208906 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "874a87b2-c81a-4ce9-85c6-c41d18835f35" (UID: "874a87b2-c81a-4ce9-85c6-c41d18835f35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.230305 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9h4\" (UniqueName: \"kubernetes.io/projected/874a87b2-c81a-4ce9-85c6-c41d18835f35-kube-api-access-6b9h4\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.230363 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.230424 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874a87b2-c81a-4ce9-85c6-c41d18835f35-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.329418 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.333290 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dk77x"] Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.536882 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.585142 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.585294 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.635189 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") pod \"33689f3c-1867-4707-a8c2-ed56c467cff6\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.635297 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") pod \"33689f3c-1867-4707-a8c2-ed56c467cff6\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.635402 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") pod \"33689f3c-1867-4707-a8c2-ed56c467cff6\" (UID: \"33689f3c-1867-4707-a8c2-ed56c467cff6\") " Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.636751 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities" (OuterVolumeSpecName: "utilities") pod "33689f3c-1867-4707-a8c2-ed56c467cff6" (UID: "33689f3c-1867-4707-a8c2-ed56c467cff6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.637078 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.645708 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k" (OuterVolumeSpecName: "kube-api-access-9fd8k") pod "33689f3c-1867-4707-a8c2-ed56c467cff6" (UID: "33689f3c-1867-4707-a8c2-ed56c467cff6"). InnerVolumeSpecName "kube-api-access-9fd8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.707349 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33689f3c-1867-4707-a8c2-ed56c467cff6" (UID: "33689f3c-1867-4707-a8c2-ed56c467cff6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.740011 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.740074 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fd8k\" (UniqueName: \"kubernetes.io/projected/33689f3c-1867-4707-a8c2-ed56c467cff6-kube-api-access-9fd8k\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:16 crc kubenswrapper[4984]: I0130 10:15:16.740088 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33689f3c-1867-4707-a8c2-ed56c467cff6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007480 4984 generic.go:334] "Generic (PLEG): container finished" podID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" exitCode=0 Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007624 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-njw8t" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007711 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc"} Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007767 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njw8t" event={"ID":"33689f3c-1867-4707-a8c2-ed56c467cff6","Type":"ContainerDied","Data":"b26139038cf2bdec7d270d289e5224326f6cb2004f0e33cd6645f7d045b4467b"} Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.007803 4984 scope.go:117] "RemoveContainer" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.042535 4984 scope.go:117] "RemoveContainer" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.054193 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.057729 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njw8t"] Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.069576 4984 scope.go:117] "RemoveContainer" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.070564 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.090441 4984 scope.go:117] "RemoveContainer" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" Jan 30 10:15:17 crc 
kubenswrapper[4984]: E0130 10:15:17.094619 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc\": container with ID starting with d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc not found: ID does not exist" containerID="d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.094760 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc"} err="failed to get container status \"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc\": rpc error: code = NotFound desc = could not find container \"d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc\": container with ID starting with d60435ed82486c6f4e2628647748b28882668a41f9de09f9732d16e7d006fcbc not found: ID does not exist" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.094803 4984 scope.go:117] "RemoveContainer" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" Jan 30 10:15:17 crc kubenswrapper[4984]: E0130 10:15:17.097283 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9\": container with ID starting with ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9 not found: ID does not exist" containerID="ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.097343 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9"} err="failed to get container status 
\"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9\": rpc error: code = NotFound desc = could not find container \"ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9\": container with ID starting with ed7a33ea794361da4e5641d8edfc764b7ea8991a409de98b519f34fe6e2efbd9 not found: ID does not exist" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.097370 4984 scope.go:117] "RemoveContainer" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" Jan 30 10:15:17 crc kubenswrapper[4984]: E0130 10:15:17.098943 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26\": container with ID starting with b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26 not found: ID does not exist" containerID="b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26" Jan 30 10:15:17 crc kubenswrapper[4984]: I0130 10:15:17.099230 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26"} err="failed to get container status \"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26\": rpc error: code = NotFound desc = could not find container \"b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26\": container with ID starting with b2a4494a43180d77c5ecaa6be87e6c7145ae295d17d4a25f634df0f0fc9fef26 not found: ID does not exist" Jan 30 10:15:18 crc kubenswrapper[4984]: I0130 10:15:18.101870 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" path="/var/lib/kubelet/pods/33689f3c-1867-4707-a8c2-ed56c467cff6/volumes" Jan 30 10:15:18 crc kubenswrapper[4984]: I0130 10:15:18.104211 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" 
path="/var/lib/kubelet/pods/874a87b2-c81a-4ce9-85c6-c41d18835f35/volumes" Jan 30 10:15:21 crc kubenswrapper[4984]: I0130 10:15:21.499308 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" containerID="cri-o://cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" gracePeriod=15 Jan 30 10:15:21 crc kubenswrapper[4984]: I0130 10:15:21.912477 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019005 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019045 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019080 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019114 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019133 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019152 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019191 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019213 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019228 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.019581 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020157 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020214 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020313 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020314 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020372 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020407 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020428 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") pod \"b78342ea-bd31-48b3-b052-638da558730c\" (UID: \"b78342ea-bd31-48b3-b052-638da558730c\") " Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020619 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020631 4984 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b78342ea-bd31-48b3-b052-638da558730c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020639 4984 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.020648 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 
10:15:22.020966 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.025957 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.026040 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9" (OuterVolumeSpecName: "kube-api-access-znjx9") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "kube-api-access-znjx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.026910 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.027234 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.028939 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.029237 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.033667 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035554 4984 generic.go:334] "Generic (PLEG): container finished" podID="b78342ea-bd31-48b3-b052-638da558730c" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" exitCode=0 Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerDied","Data":"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61"} Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035617 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" event={"ID":"b78342ea-bd31-48b3-b052-638da558730c","Type":"ContainerDied","Data":"22894fd3f7185098bfb82595039c231f2f5583d91c055ff95ffbf8f516afcd2e"} Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035634 4984 scope.go:117] "RemoveContainer" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.035731 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-59vj6" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.038551 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.040461 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b78342ea-bd31-48b3-b052-638da558730c" (UID: "b78342ea-bd31-48b3-b052-638da558730c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.091852 4984 scope.go:117] "RemoveContainer" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" Jan 30 10:15:22 crc kubenswrapper[4984]: E0130 10:15:22.092583 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61\": container with ID starting with cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61 not found: ID does not exist" containerID="cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.092985 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61"} err="failed to get container status \"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61\": rpc error: code = NotFound desc = could not find container \"cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61\": container with ID starting with cb111b2afb7d7008ae0fa8430ca4cbe13b2f0d05356f1f32fc995e5d392e9a61 not found: ID does not exist" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121691 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121723 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121733 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121743 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121752 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121761 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znjx9\" (UniqueName: \"kubernetes.io/projected/b78342ea-bd31-48b3-b052-638da558730c-kube-api-access-znjx9\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121773 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 
10:15:22.121783 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121791 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.121799 4984 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b78342ea-bd31-48b3-b052-638da558730c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.350824 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:15:22 crc kubenswrapper[4984]: I0130 10:15:22.358515 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-59vj6"] Jan 30 10:15:24 crc kubenswrapper[4984]: I0130 10:15:24.096076 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78342ea-bd31-48b3-b052-638da558730c" path="/var/lib/kubelet/pods/b78342ea-bd31-48b3-b052-638da558730c/volumes" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.380557 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"] Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381711 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381742 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" 
containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381766 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381782 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381807 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381824 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381850 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381864 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381884 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381901 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="extract-content" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381920 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381935 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" 
containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381955 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.381970 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.381994 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382010 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.382026 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382042 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.382059 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerName="collect-profiles" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382074 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerName="collect-profiles" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.382112 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382128 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" 
containerName="extract-utilities" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382380 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="33689f3c-1867-4707-a8c2-ed56c467cff6" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382412 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="874a87b2-c81a-4ce9-85c6-c41d18835f35" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382444 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78342ea-bd31-48b3-b052-638da558730c" containerName="oauth-openshift" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382463 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" containerName="collect-profiles" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.382487 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdbdcd6-0816-4dc6-bdb2-e3088376d3fe" containerName="registry-server" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.383205 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.385795 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.385929 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.385956 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.392093 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.392368 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.392603 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.393040 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.393748 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.394689 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.395227 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 10:15:30 
crc kubenswrapper[4984]: I0130 10:15:30.399546 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.400281 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.400330 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.406506 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.419437 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.429632 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436046 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-dir\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436096 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " 
pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436117 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436159 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-login\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-policies\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436200 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436263 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-session\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436296 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436316 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75xw\" (UniqueName: \"kubernetes.io/projected/32ac01b6-bb42-436f-bddf-fb35fbeff725-kube-api-access-v75xw\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436355 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-router-certs\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436380 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-error\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.436476 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-service-ca\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537500 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-router-certs\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: 
\"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537609 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-error\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537662 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537711 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-service-ca\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537753 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-dir\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537843 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537892 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537927 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-login\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.537967 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-policies\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " 
pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538035 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538095 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-session\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538204 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v75xw\" (UniqueName: \"kubernetes.io/projected/32ac01b6-bb42-436f-bddf-fb35fbeff725-kube-api-access-v75xw\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.538482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-dir\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.539357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.539406 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-service-ca\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.540981 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-audit-policies\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.541901 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc 
kubenswrapper[4984]: I0130 10:15:30.545090 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.545693 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.545764 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.546074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-login\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.546557 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-session\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.547172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.552668 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-user-template-error\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.553680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/32ac01b6-bb42-436f-bddf-fb35fbeff725-v4-0-config-system-router-certs\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.559571 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75xw\" (UniqueName: \"kubernetes.io/projected/32ac01b6-bb42-436f-bddf-fb35fbeff725-kube-api-access-v75xw\") pod \"oauth-openshift-5969b76fdc-qf4wv\" (UID: \"32ac01b6-bb42-436f-bddf-fb35fbeff725\") " pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 
10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.703834 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.807808 4984 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.808929 4984 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.808961 4984 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809085 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809165 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809179 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809191 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809200 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809214 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809240 4984 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809272 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809281 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809292 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809301 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809338 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809347 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809359 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809370 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 10:15:30 crc kubenswrapper[4984]: E0130 10:15:30.809380 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809409 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809553 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809722 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809777 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809857 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809924 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 
10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809903 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" gracePeriod=15 Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.809950 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810046 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810068 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810092 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810114 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.810129 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.816170 4984 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.843663 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.843905 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.843987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844129 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc 
kubenswrapper[4984]: I0130 10:15:30.844167 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844231 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.844301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.948837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.948965 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 
10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.949944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950058 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950123 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950167 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950198 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950263 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950309 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") 
pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950385 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950443 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:30 crc kubenswrapper[4984]: I0130 10:15:30.950489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.100280 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.102724 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104232 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104315 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104358 4984 scope.go:117] "RemoveContainer" containerID="3d7a95fd4fc9bc38727724a823d4f489a4715009223f13fb2b93adba2474f6e5" Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104336 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.104397 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" exitCode=2 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.107121 4984 generic.go:334] "Generic (PLEG): container finished" podID="6f61aac1-18eb-4615-958d-b52a11645afb" containerID="1ece5995ec1cb186ea0589ac48611a00d40c00849c2709d52ee48a8bf55e2079" exitCode=0 Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.107170 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerDied","Data":"1ece5995ec1cb186ea0589ac48611a00d40c00849c2709d52ee48a8bf55e2079"} Jan 30 10:15:31 crc kubenswrapper[4984]: I0130 10:15:31.108017 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: 
connection refused" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467231 4984 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 10:15:31 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 crc kubenswrapper[4984]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:31 crc kubenswrapper[4984]: > Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467432 4984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 10:15:31 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:31 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467506 4984 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 10:15:31 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:31 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.467635 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb\\\" 
Netns:\\\"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s\\\": dial tcp 38.102.83.169:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:15:31 crc kubenswrapper[4984]: E0130 10:15:31.468422 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event=< Jan 30 10:15:31 crc kubenswrapper[4984]: 
&Event{ObjectMeta:{oauth-openshift-5969b76fdc-qf4wv.188f7abd2aedd9df openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-5969b76fdc-qf4wv,UID:32ac01b6-bb42-436f-bddf-fb35fbeff725,APIVersion:v1,ResourceVersion:29552,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:31 
crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,LastTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 10:15:31 crc kubenswrapper[4984]: > Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.121004 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.122415 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.123309 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.162787 4984 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" volumeName="registry-storage" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.446710 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.447874 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473322 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") pod \"6f61aac1-18eb-4615-958d-b52a11645afb\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473397 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") pod \"6f61aac1-18eb-4615-958d-b52a11645afb\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473394 4984 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock" (OuterVolumeSpecName: "var-lock") pod "6f61aac1-18eb-4615-958d-b52a11645afb" (UID: "6f61aac1-18eb-4615-958d-b52a11645afb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473424 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") pod \"6f61aac1-18eb-4615-958d-b52a11645afb\" (UID: \"6f61aac1-18eb-4615-958d-b52a11645afb\") " Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f61aac1-18eb-4615-958d-b52a11645afb" (UID: "6f61aac1-18eb-4615-958d-b52a11645afb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473717 4984 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.473727 4984 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f61aac1-18eb-4615-958d-b52a11645afb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.481465 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f61aac1-18eb-4615-958d-b52a11645afb" (UID: "6f61aac1-18eb-4615-958d-b52a11645afb"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:15:32 crc kubenswrapper[4984]: I0130 10:15:32.575926 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f61aac1-18eb-4615-958d-b52a11645afb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848126 4984 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 10:15:32 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88" Netns:"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:32 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:32 crc kubenswrapper[4984]: > Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848230 4984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 10:15:32 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88" Netns:"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to 
update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:32 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:32 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848331 4984 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 10:15:32 crc kubenswrapper[4984]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88" Netns:"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error 
configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:32 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 10:15:32 crc kubenswrapper[4984]: > pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:32 crc kubenswrapper[4984]: E0130 10:15:32.848445 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network \\\"multus-cni-network\\\": plugin 
type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88\\\" Netns:\\\"/var/run/netns/947bfe5a-7595-451b-b5d0-3903e0cd1bbb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=c4d742e2d1baac1d17662e2794767b17e819d359dedbe958569cab2e1cdd5b88;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s\\\": dial tcp 38.102.83.169:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.133987 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f61aac1-18eb-4615-958d-b52a11645afb","Type":"ContainerDied","Data":"2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853"} Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.134346 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5e6ce21fd1ec67e2914cc6c6c0228b95028e76ed2eb39d9c3defacf99df853" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.134126 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.173766 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.180365 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.181779 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.182490 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.182933 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.295502 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.295617 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.295658 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.296102 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.296163 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.296161 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.397882 4984 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.398157 4984 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:33 crc kubenswrapper[4984]: I0130 10:15:33.398279 4984 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:15:33 crc kubenswrapper[4984]: E0130 10:15:33.834678 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event=< Jan 30 10:15:33 crc kubenswrapper[4984]: &Event{ObjectMeta:{oauth-openshift-5969b76fdc-qf4wv.188f7abd2aedd9df openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-5969b76fdc-qf4wv,UID:32ac01b6-bb42-436f-bddf-fb35fbeff725,APIVersion:v1,ResourceVersion:29552,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:33 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,LastTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 10:15:33 crc kubenswrapper[4984]: > Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 
10:15:34.102125 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.143850 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.145010 4984 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" exitCode=0 Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.145102 4984 scope.go:117] "RemoveContainer" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.145156 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.146637 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.147299 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.150212 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.150604 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.161900 4984 scope.go:117] "RemoveContainer" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.177463 4984 scope.go:117] "RemoveContainer" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.197100 4984 scope.go:117] "RemoveContainer" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.213236 4984 scope.go:117] "RemoveContainer" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.230080 4984 scope.go:117] "RemoveContainer" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.252447 4984 scope.go:117] "RemoveContainer" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.254936 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\": container with ID starting with 
7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733 not found: ID does not exist" containerID="7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.254975 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733"} err="failed to get container status \"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\": rpc error: code = NotFound desc = could not find container \"7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733\": container with ID starting with 7f193519fc9d506a857ef54365ced14c117b9f2ab659f330ede19a742f15d733 not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255003 4984 scope.go:117] "RemoveContainer" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.255472 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\": container with ID starting with 9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0 not found: ID does not exist" containerID="9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255512 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0"} err="failed to get container status \"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\": rpc error: code = NotFound desc = could not find container \"9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0\": container with ID starting with 9228a9038efcde549bddec90aec9ec9ae0ff75e97065daf71ebdb4b11e5c51d0 not found: ID does not 
exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255537 4984 scope.go:117] "RemoveContainer" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.255917 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\": container with ID starting with 77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b not found: ID does not exist" containerID="77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255950 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b"} err="failed to get container status \"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\": rpc error: code = NotFound desc = could not find container \"77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b\": container with ID starting with 77b6b73104c3dbc173d7b19427ca988246116b58b9056a41724cb1653a96909b not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.255970 4984 scope.go:117] "RemoveContainer" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.256382 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\": container with ID starting with 969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9 not found: ID does not exist" containerID="969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.256435 4984 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9"} err="failed to get container status \"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\": rpc error: code = NotFound desc = could not find container \"969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9\": container with ID starting with 969495cd5c458f800325e0bdba4211d4958d07fa8b32bee6ddf1855ff67386c9 not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.256462 4984 scope.go:117] "RemoveContainer" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.257017 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\": container with ID starting with 73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434 not found: ID does not exist" containerID="73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.257047 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434"} err="failed to get container status \"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\": rpc error: code = NotFound desc = could not find container \"73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434\": container with ID starting with 73f9a9d5b7bb2e22a63fbbfb3c915f733ace8ff778a0a83aeb67e54c384e4434 not found: ID does not exist" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.257065 4984 scope.go:117] "RemoveContainer" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" Jan 30 10:15:34 crc kubenswrapper[4984]: E0130 10:15:34.257345 4984 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\": container with ID starting with f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb not found: ID does not exist" containerID="f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb" Jan 30 10:15:34 crc kubenswrapper[4984]: I0130 10:15:34.257366 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb"} err="failed to get container status \"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\": rpc error: code = NotFound desc = could not find container \"f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb\": container with ID starting with f8bb47148c2de89c464d41f21e6ea21397b73a43013ca523f17b134df0eefdbb not found: ID does not exist" Jan 30 10:15:35 crc kubenswrapper[4984]: E0130 10:15:35.841849 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:35 crc kubenswrapper[4984]: I0130 10:15:35.842493 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:35 crc kubenswrapper[4984]: W0130 10:15:35.885986 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea WatchSource:0}: Error finding container d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea: Status 404 returned error can't find the container with id d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.092195 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.093367 4984 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.156406 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d89100c9f175973942392d2ebd25e3a9a9602b0676402e1260cbee825faef2ea"} Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.478958 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: 
connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.479410 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.479742 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.480220 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.480697 4984 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:36 crc kubenswrapper[4984]: I0130 10:15:36.480744 4984 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.481185 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Jan 30 10:15:36 crc kubenswrapper[4984]: E0130 10:15:36.682138 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Jan 30 10:15:37 crc kubenswrapper[4984]: E0130 10:15:37.083297 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Jan 30 10:15:37 crc kubenswrapper[4984]: I0130 10:15:37.162243 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124"} Jan 30 10:15:37 crc kubenswrapper[4984]: I0130 10:15:37.163560 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:37 crc kubenswrapper[4984]: E0130 10:15:37.163583 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:37 crc kubenswrapper[4984]: E0130 10:15:37.885289 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Jan 30 10:15:38 crc kubenswrapper[4984]: E0130 10:15:38.171383 4984 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:15:39 crc kubenswrapper[4984]: E0130 10:15:39.486469 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Jan 30 10:15:42 crc kubenswrapper[4984]: E0130 10:15:42.688454 4984 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="6.4s" Jan 30 10:15:43 crc kubenswrapper[4984]: E0130 10:15:43.835931 4984 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event=< Jan 30 10:15:43 crc kubenswrapper[4984]: &Event{ObjectMeta:{oauth-openshift-5969b76fdc-qf4wv.188f7abd2aedd9df openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-5969b76fdc-qf4wv,UID:32ac01b6-bb42-436f-bddf-fb35fbeff725,APIVersion:v1,ResourceVersion:29552,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5969b76fdc-qf4wv_openshift-authentication_32ac01b6-bb42-436f-bddf-fb35fbeff725_0(7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb): error adding pod openshift-authentication_oauth-openshift-5969b76fdc-qf4wv to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb" Netns:"/var/run/netns/b824e528-4cff-4a0c-9295-0fdefae51e7d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5969b76fdc-qf4wv;K8S_POD_INFRA_CONTAINER_ID=7898844b2602efb8a16b0c81c7e956356b9fb76d2f245568d4e062baba05dbcb;K8S_POD_UID=32ac01b6-bb42-436f-bddf-fb35fbeff725" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv] networking: Multus: [openshift-authentication/oauth-openshift-5969b76fdc-qf4wv/32ac01b6-bb42-436f-bddf-fb35fbeff725]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-5969b76fdc-qf4wv in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5969b76fdc-qf4wv?timeout=1m0s": dial tcp 38.102.83.169:6443: connect: connection refused Jan 30 10:15:43 crc kubenswrapper[4984]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,LastTimestamp:2026-01-30 10:15:31.467532767 +0000 UTC m=+236.033836631,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 30 10:15:43 crc 
kubenswrapper[4984]: > Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.089850 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.090730 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.104718 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.104756 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:44 crc kubenswrapper[4984]: E0130 10:15:44.105296 4984 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.105855 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:44 crc kubenswrapper[4984]: I0130 10:15:44.215663 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d307620d0884870a6e672247167a1f63c4395bee47bcf37596ddd47ebdccb2cd"} Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.231809 4984 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5945a1207df9f18a832f4361622a41a77f812af5137b5d1bece2b50590183fd5" exitCode=0 Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.231948 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5945a1207df9f18a832f4361622a41a77f812af5137b5d1bece2b50590183fd5"} Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.232476 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.232513 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.232812 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:45 crc kubenswrapper[4984]: E0130 10:15:45.233197 4984 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.235698 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.235778 4984 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862" exitCode=1 Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.235826 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862"} Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.236486 4984 scope.go:117] "RemoveContainer" containerID="7af960d59e7b3c4a25eddde95e273d5ed259be6b0948cb1514768b36a52f7862" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.236737 4984 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:45 crc kubenswrapper[4984]: I0130 10:15:45.237168 4984 status_manager.go:851] "Failed to get status for pod" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249009 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"382501c4a9401198b12f43f1c9871b331a3acaf16059b28153538e52464a9877"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249346 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"433924f8dbe65ee693927f99578aed74afe176943f52ca2997a63c77fce15fcd"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249358 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9cdf289a1a4dc8784fd66d006c9a99bdd1ae05d195d7ab3badf10f5ba3600bc3"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.249367 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6617ef6221db6ef99ee4d32c9c576b8835c442339f3a57e01f579889db682bee"} Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.261657 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 10:15:46 crc kubenswrapper[4984]: I0130 10:15:46.261718 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"386ac0e2da549d47f83372e76dbab9d7655bd295ed742327a255bc5959332337"} Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270331 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35d2d9606fba5f728c664118be2207d529b11e6f6b587f0f7e1e022706dd6c71"} Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270668 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270684 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10" Jan 30 10:15:47 crc kubenswrapper[4984]: I0130 10:15:47.270882 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.089641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.090163 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.250087 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.254152 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:15:48 crc kubenswrapper[4984]: I0130 10:15:48.275568 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.106694 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.106971 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.112384 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.281942 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerStarted","Data":"3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a"} Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.282389 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerStarted","Data":"b12452f91f4422c135c699a751411ee22c9b5c803805d3b922fa747b5824deb1"} Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.282757 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.286473 4984 patch_prober.go:28] interesting pod/oauth-openshift-5969b76fdc-qf4wv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Jan 30 10:15:49 crc kubenswrapper[4984]: I0130 10:15:49.286531 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.292199 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/0.log" Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.292299 4984 generic.go:334] "Generic (PLEG): container finished" podID="32ac01b6-bb42-436f-bddf-fb35fbeff725" containerID="3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a" exitCode=255 Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.292347 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerDied","Data":"3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a"} Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.293748 4984 scope.go:117] "RemoveContainer" containerID="3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a" Jan 30 10:15:50 crc kubenswrapper[4984]: I0130 10:15:50.704394 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.300101 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/1.log" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301148 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/0.log" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301185 4984 generic.go:334] "Generic (PLEG): container finished" podID="32ac01b6-bb42-436f-bddf-fb35fbeff725" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" exitCode=255 Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301211 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerDied","Data":"b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644"} Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301242 4984 scope.go:117] "RemoveContainer" containerID="3489a2d08752959f92cdd8873e40fd0468ba9b5590a2db0c6ac5f715b2b6b25a" Jan 30 10:15:51 crc kubenswrapper[4984]: I0130 10:15:51.301679 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644" Jan 30 10:15:51 crc kubenswrapper[4984]: E0130 10:15:51.301943 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" 
podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.280754 4984 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.307684 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/1.log"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.308153 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644"
Jan 30 10:15:52 crc kubenswrapper[4984]: E0130 10:15:52.308339 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.308760 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.308853 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.312803 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 10:15:52 crc kubenswrapper[4984]: I0130 10:15:52.331531 4984 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="68f6fa21-dccc-4f13-a98b-983b556e4c18"
Jan 30 10:15:53 crc kubenswrapper[4984]: I0130 10:15:53.315033 4984 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10"
Jan 30 10:15:53 crc kubenswrapper[4984]: I0130 10:15:53.315082 4984 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a410aa98-4908-4c3d-bca6-f7e056916e10"
Jan 30 10:15:56 crc kubenswrapper[4984]: I0130 10:15:56.108567 4984 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="68f6fa21-dccc-4f13-a98b-983b556e4c18"
Jan 30 10:16:00 crc kubenswrapper[4984]: I0130 10:16:00.704224 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"
Jan 30 10:16:00 crc kubenswrapper[4984]: I0130 10:16:00.704962 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"
Jan 30 10:16:00 crc kubenswrapper[4984]: I0130 10:16:00.705812 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644"
Jan 30 10:16:00 crc kubenswrapper[4984]: E0130 10:16:00.706194 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5969b76fdc-qf4wv_openshift-authentication(32ac01b6-bb42-436f-bddf-fb35fbeff725)\"" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podUID="32ac01b6-bb42-436f-bddf-fb35fbeff725"
Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.541859 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.567633 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.817549 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 10:16:02 crc kubenswrapper[4984]: I0130 10:16:02.863897 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.235895 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.263786 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.526353 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.575589 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.731042 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.745667 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.853159 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 10:16:03 crc kubenswrapper[4984]: I0130 10:16:03.938131 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.492846 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.679751 4984 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.819840 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.937043 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.939933 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.975521 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.977235 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 10:16:04 crc kubenswrapper[4984]: I0130 10:16:04.996333 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.033878 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.198508 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.201028 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.247827 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.306096 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.353823 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.374716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.468775 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.478272 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.499340 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.527771 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.547129 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.657755 4984 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.754716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.822909 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 10:16:05 crc kubenswrapper[4984]: I0130 10:16:05.949407 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.103091 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.121855 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.148722 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.197174 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.201086 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.330157 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.370572 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.381332 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.540108 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.559039 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.580476 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.596237 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.621907 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.683499 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.688490 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.801300 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.827940 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.868196 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.876212 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.900469 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.933569 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 10:16:06 crc kubenswrapper[4984]: I0130 10:16:06.941747 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.021178 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.096467 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.108427 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.109622 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.211203 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.341866 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.439951 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.498295 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.624855 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.695619 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.734119 4984 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.796103 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.870216 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.947627 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 10:16:07 crc kubenswrapper[4984]: I0130 10:16:07.987425 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.003803 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.176716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.213889 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.239772 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.317639 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.351727 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.371029 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.371038 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.378885 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.386062 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.410164 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.431567 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.473634 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.483816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.503666 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.616622 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.885958 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 10:16:08 crc kubenswrapper[4984]: I0130 10:16:08.921244 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.147951 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.278512 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.386334 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.472121 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.558364 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.587598 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.642681 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.645033 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.871060 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.925445 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.956621 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.979400 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 10:16:09 crc kubenswrapper[4984]: I0130 10:16:09.989026 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.014663 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.132959 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.200508 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.232493 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.289924 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.304902 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.308984 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.348828 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.358032 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.403011 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.431182 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.466648 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.473717 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.608168 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.743873 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.803452 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.837214 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.853201 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.854658 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.883584 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.950373 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 10:16:10 crc kubenswrapper[4984]: I0130 10:16:10.999207 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.090364 4984 scope.go:117] "RemoveContainer" containerID="b4ae129124cce1a560daec0df2fd72777e5c2a199f7288c6d30b2e31ee098644"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.108722 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.286458 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.302860 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.309661 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.419823 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.433888 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5969b76fdc-qf4wv_32ac01b6-bb42-436f-bddf-fb35fbeff725/oauth-openshift/1.log"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.433951 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" event={"ID":"32ac01b6-bb42-436f-bddf-fb35fbeff725","Type":"ContainerStarted","Data":"65c333ae11e5607a3622044f3dd1aa65c8b84cfc261be44144dddfe71d8679ab"}
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.434300 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.442261 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.517397 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.580381 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.590392 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.603490 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.615169 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.685826 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.713033 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.764109 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.775018 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.777466 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.782113 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.791999 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.805217 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.849803 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.851915 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.909784 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 10:16:11 crc kubenswrapper[4984]: I0130 10:16:11.979842 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.024493 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.031415 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.056802 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.103537 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.127540 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.241122 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.267199 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.297044 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.319113 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.380372 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.423557 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.525006 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.540989 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.573668 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.643922 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.687361 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.711679 4984 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.712076 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5969b76fdc-qf4wv" podStartSLOduration=76.71205666 podStartE2EDuration="1m16.71205666s" podCreationTimestamp="2026-01-30 10:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:11.471269838 +0000 UTC m=+276.037573662" watchObservedRunningTime="2026-01-30 10:16:12.71205666 +0000 UTC m=+277.278360484"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.715977 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.716025 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.716041 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5969b76fdc-qf4wv"]
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.721846 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.739286 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.7392731 podStartE2EDuration="20.7392731s" podCreationTimestamp="2026-01-30 10:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:12.737598614 +0000 UTC m=+277.303902468" watchObservedRunningTime="2026-01-30 10:16:12.7392731 +0000 UTC m=+277.305576924"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.765816 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 10:16:12 crc kubenswrapper[4984]: I0130 10:16:12.984687 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.006178 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.015606 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.017852 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.051928 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.114195 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.160581 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.166683 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.234309 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.293112 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.476391 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.514688 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.533879 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.629407 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.630430 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.719696 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.796450 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.801614 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.821454 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30
10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.859091 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.931109 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 10:16:13 crc kubenswrapper[4984]: I0130 10:16:13.947294 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.003138 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.033206 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.057629 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.068888 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.317805 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.324408 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.412159 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.452976 4984 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.462440 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.497700 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.554496 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.640006 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.659010 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.722693 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.788724 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.813763 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.827665 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.837636 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 
10:16:14.859816 4984 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.860233 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" gracePeriod=5 Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.867436 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 10:16:14 crc kubenswrapper[4984]: I0130 10:16:14.967813 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.005121 4984 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.044518 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.096319 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.152728 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.234097 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.271380 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 10:16:15 crc 
kubenswrapper[4984]: I0130 10:16:15.442450 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.504934 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.510290 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.511382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.626291 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.644116 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.776213 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.808432 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 10:16:15 crc kubenswrapper[4984]: I0130 10:16:15.924998 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.022423 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.049890 4984 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.055384 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.062370 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.104737 4984 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.197784 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.202130 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.295537 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.308744 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.368793 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.490858 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.524174 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.641060 4984 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.943055 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.969025 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 10:16:16 crc kubenswrapper[4984]: I0130 10:16:16.990340 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.090890 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.173294 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.292111 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.345687 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.441076 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.447027 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.447361 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 10:16:17 crc kubenswrapper[4984]: I0130 10:16:17.570353 4984 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 10:16:18 crc kubenswrapper[4984]: I0130 10:16:18.118441 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 10:16:18 crc kubenswrapper[4984]: I0130 10:16:18.583794 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 10:16:18 crc kubenswrapper[4984]: I0130 10:16:18.933972 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 10:16:19 crc kubenswrapper[4984]: I0130 10:16:19.107907 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.451839 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.451912 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485213 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485305 4984 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" exitCode=137 Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485362 4984 scope.go:117] "RemoveContainer" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.485470 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.506531 4984 scope.go:117] "RemoveContainer" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" Jan 30 10:16:20 crc kubenswrapper[4984]: E0130 10:16:20.507067 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124\": container with ID starting with 510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124 not found: ID does not exist" containerID="510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.507121 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124"} err="failed to get container status \"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124\": rpc error: code = NotFound desc = could 
not find container \"510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124\": container with ID starting with 510a9d19cca5a91d0963921a9ccbebf93cacba8a9ce7dea2b066a550e4bf4124 not found: ID does not exist" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607040 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607189 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607285 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607321 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607351 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607349 4984 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607411 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607446 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607476 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607858 4984 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607915 4984 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607936 4984 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.607954 4984 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.618143 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.708952 4984 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 10:16:20 crc kubenswrapper[4984]: I0130 10:16:20.870667 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:16:22 crc kubenswrapper[4984]: I0130 10:16:22.070908 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 10:16:22 crc kubenswrapper[4984]: I0130 10:16:22.097108 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 10:16:34 crc kubenswrapper[4984]: I0130 10:16:34.578074 4984 generic.go:334] "Generic (PLEG): container finished" podID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" exitCode=0 Jan 30 10:16:34 crc kubenswrapper[4984]: I0130 10:16:34.578193 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerDied","Data":"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7"} Jan 30 10:16:34 crc kubenswrapper[4984]: I0130 10:16:34.578907 4984 scope.go:117] "RemoveContainer" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.586915 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" 
event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerStarted","Data":"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9"} Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.587996 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.588922 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:16:35 crc kubenswrapper[4984]: I0130 10:16:35.887770 4984 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.593461 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"] Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.595296 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" containerID="cri-o://04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" gracePeriod=30 Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.693459 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"] Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.694307 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" containerID="cri-o://92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" gracePeriod=30 Jan 30 10:16:39 crc kubenswrapper[4984]: I0130 10:16:39.993300 4984 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.035025 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.078234 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079086 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079105 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079119 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079126 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079134 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079140 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.079153 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" containerName="installer" Jan 30 10:16:40 crc 
kubenswrapper[4984]: I0130 10:16:40.079159 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" containerName="installer"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079262 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerName="controller-manager"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079275 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079286 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f934f289-4896-49e7-b0ad-12222ed44137" containerName="route-controller-manager"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079295 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f61aac1-18eb-4615-958d-b52a11645afb" containerName="installer"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.079648 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.085355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.086455 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.087614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.087643 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.087695 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") pod \"f03e3054-ba21-45c6-8cbd-786eb7eac685\" (UID: \"f03e3054-ba21-45c6-8cbd-786eb7eac685\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.088684 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.091446 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz" (OuterVolumeSpecName: "kube-api-access-lhskz") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "kube-api-access-lhskz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.092069 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.092291 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca" (OuterVolumeSpecName: "client-ca") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.095655 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config" (OuterVolumeSpecName: "config") pod "f03e3054-ba21-45c6-8cbd-786eb7eac685" (UID: "f03e3054-ba21-45c6-8cbd-786eb7eac685"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.106635 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"]
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.192524 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.192595 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193309 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca" (OuterVolumeSpecName: "client-ca") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193371 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193689 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") pod \"f934f289-4896-49e7-b0ad-12222ed44137\" (UID: \"f934f289-4896-49e7-b0ad-12222ed44137\") "
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193819 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193895 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.193971 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194167 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194209 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194222 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhskz\" (UniqueName: \"kubernetes.io/projected/f03e3054-ba21-45c6-8cbd-786eb7eac685-kube-api-access-lhskz\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194237 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f03e3054-ba21-45c6-8cbd-786eb7eac685-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194271 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194281 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f03e3054-ba21-45c6-8cbd-786eb7eac685-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.194743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config" (OuterVolumeSpecName: "config") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.196837 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.197122 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg" (OuterVolumeSpecName: "kube-api-access-t67jg") pod "f934f289-4896-49e7-b0ad-12222ed44137" (UID: "f934f289-4896-49e7-b0ad-12222ed44137"). InnerVolumeSpecName "kube-api-access-t67jg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.294878 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.294938 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.294989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295077 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295175 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f934f289-4896-49e7-b0ad-12222ed44137-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295197 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f934f289-4896-49e7-b0ad-12222ed44137-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.295216 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t67jg\" (UniqueName: \"kubernetes.io/projected/f934f289-4896-49e7-b0ad-12222ed44137-kube-api-access-t67jg\") on node \"crc\" DevicePath \"\""
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.296233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.296344 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.300478 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.326910 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"route-controller-manager-76b5f4d9cd-ngrtv\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.395929 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.611851 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"]
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616036 4984 generic.go:334] "Generic (PLEG): container finished" podID="f03e3054-ba21-45c6-8cbd-786eb7eac685" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142" exitCode=0
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616102 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616124 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerDied","Data":"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"}
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616163 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5sdnz" event={"ID":"f03e3054-ba21-45c6-8cbd-786eb7eac685","Type":"ContainerDied","Data":"ea7973a6b7aeb56d77b3657c44c45b40105b1dffec897b668fde3fd406ab2c03"}
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.616192 4984 scope.go:117] "RemoveContainer" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.621903 4984 generic.go:334] "Generic (PLEG): container finished" podID="f934f289-4896-49e7-b0ad-12222ed44137" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607" exitCode=0
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.621944 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerDied","Data":"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"}
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.621968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww" event={"ID":"f934f289-4896-49e7-b0ad-12222ed44137","Type":"ContainerDied","Data":"5d2a7595aa7be4a2d24c3db3a03ceede193b8f38eb6567b569e38559c698d2a9"}
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.622020 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.638220 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"]
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.643112 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5sdnz"]
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.653239 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"]
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.659595 4984 scope.go:117] "RemoveContainer" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"
Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.660070 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142\": container with ID starting with 04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142 not found: ID does not exist" containerID="04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.660106 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142"} err="failed to get container status \"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142\": rpc error: code = NotFound desc = could not find container \"04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142\": container with ID starting with 04ff8a1f2a9feaf912549fa075c64cc85cd053796a01754dd3bcacad7cf35142 not found: ID does not exist"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.660154 4984 scope.go:117] "RemoveContainer" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.664347 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6xww"]
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.675487 4984 scope.go:117] "RemoveContainer" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"
Jan 30 10:16:40 crc kubenswrapper[4984]: E0130 10:16:40.676038 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607\": container with ID starting with 92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607 not found: ID does not exist" containerID="92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"
Jan 30 10:16:40 crc kubenswrapper[4984]: I0130 10:16:40.676081 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607"} err="failed to get container status \"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607\": rpc error: code = NotFound desc = could not find container \"92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607\": container with ID starting with 92e6f502a3cf4e36122fce758bc64e012a74210abebb861eb9257a9fc5ec4607 not found: ID does not exist"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.419511 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"]
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.420338 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.422590 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.423946 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.425403 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.426847 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.427102 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.427420 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.433530 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.440066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"]
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508186 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508582 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508717 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508860 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.508965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.609858 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.609932 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.609980 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.610013 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.610056 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.610978 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.611901 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.612950 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.618938 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.630391 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerStarted","Data":"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c"}
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.630633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerStarted","Data":"365e334180612a639f5ba661049874fb4f2f877225cb9d8766b3099b7bb63022"}
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.631339 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.640023 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"controller-manager-fd87549dd-8dn24\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") " pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.640416 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.658346 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" podStartSLOduration=1.658325652 podStartE2EDuration="1.658325652s" podCreationTimestamp="2026-01-30 10:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:41.650549007 +0000 UTC m=+306.216852891" watchObservedRunningTime="2026-01-30 10:16:41.658325652 +0000 UTC m=+306.224629486"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.747360 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:41 crc kubenswrapper[4984]: I0130 10:16:41.990028 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"]
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.117657 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03e3054-ba21-45c6-8cbd-786eb7eac685" path="/var/lib/kubelet/pods/f03e3054-ba21-45c6-8cbd-786eb7eac685/volumes"
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.118539 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f934f289-4896-49e7-b0ad-12222ed44137" path="/var/lib/kubelet/pods/f934f289-4896-49e7-b0ad-12222ed44137/volumes"
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.638627 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerStarted","Data":"8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce"}
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.639836 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.639985 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerStarted","Data":"441e7a7744b46b2ece6c079718d1396613b0f164a34f5b2b2ecf871a33435b67"}
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.643103 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:16:42 crc kubenswrapper[4984]: I0130 10:16:42.662394 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" podStartSLOduration=3.662371437 podStartE2EDuration="3.662371437s" podCreationTimestamp="2026-01-30 10:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:16:42.657823902 +0000 UTC m=+307.224127726" watchObservedRunningTime="2026-01-30 10:16:42.662371437 +0000 UTC m=+307.228675271"
Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.596858 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"]
Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.598333 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" containerID="cri-o://8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce" gracePeriod=30
Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.736010 4984 generic.go:334] "Generic (PLEG): container finished" podID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerID="8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce" exitCode=0
Jan 30 10:16:59 crc kubenswrapper[4984]: I0130 10:16:59.736048 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerDied","Data":"8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce"}
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.218464 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24"
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.376733 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") "
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.377556 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.378697 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") "
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.378944 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") "
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.379166 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") "
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.379906 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") pod \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\" (UID: \"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4\") "
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.380473 4984 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.380796 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca" (OuterVolumeSpecName: "client-ca") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.381084 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config" (OuterVolumeSpecName: "config") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.383516 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg" (OuterVolumeSpecName: "kube-api-access-cpgdg") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "kube-api-access-cpgdg".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.384581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" (UID: "09beb5d7-e657-4d93-abdb-5aa5f9d9cce4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481848 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481906 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481953 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpgdg\" (UniqueName: \"kubernetes.io/projected/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-kube-api-access-cpgdg\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.481977 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.742568 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" event={"ID":"09beb5d7-e657-4d93-abdb-5aa5f9d9cce4","Type":"ContainerDied","Data":"441e7a7744b46b2ece6c079718d1396613b0f164a34f5b2b2ecf871a33435b67"} Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.742596 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fd87549dd-8dn24" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.743509 4984 scope.go:117] "RemoveContainer" containerID="8100b14d807fb9ba6b3eb4affe26b263c6fd5dfd33bfbe2f6187b74e12d761ce" Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.780877 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:17:00 crc kubenswrapper[4984]: I0130 10:17:00.787934 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fd87549dd-8dn24"] Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.432774 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7696df588c-pl652"] Jan 30 10:17:01 crc kubenswrapper[4984]: E0130 10:17:01.433021 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.433037 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.433139 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" containerName="controller-manager" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.433522 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.436708 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.436904 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.437663 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.437892 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.438030 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.441654 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.448391 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7696df588c-pl652"] Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.451979 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594784 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-client-ca\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " 
pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594847 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb277\" (UniqueName: \"kubernetes.io/projected/3bb375f7-22cc-4552-9c4b-49cb9ced2000-kube-api-access-qb277\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594889 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb375f7-22cc-4552-9c4b-49cb9ced2000-serving-cert\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.594931 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-config\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.595059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-proxy-ca-bundles\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696128 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-client-ca\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696222 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb277\" (UniqueName: \"kubernetes.io/projected/3bb375f7-22cc-4552-9c4b-49cb9ced2000-kube-api-access-qb277\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696313 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb375f7-22cc-4552-9c4b-49cb9ced2000-serving-cert\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696397 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-config\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.696476 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-proxy-ca-bundles\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.697771 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-config\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.697916 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-client-ca\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.698133 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bb375f7-22cc-4552-9c4b-49cb9ced2000-proxy-ca-bundles\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.700961 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb375f7-22cc-4552-9c4b-49cb9ced2000-serving-cert\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.725926 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb277\" (UniqueName: \"kubernetes.io/projected/3bb375f7-22cc-4552-9c4b-49cb9ced2000-kube-api-access-qb277\") pod \"controller-manager-7696df588c-pl652\" (UID: \"3bb375f7-22cc-4552-9c4b-49cb9ced2000\") " pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 
10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.772887 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:01 crc kubenswrapper[4984]: I0130 10:17:01.987378 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7696df588c-pl652"] Jan 30 10:17:02 crc kubenswrapper[4984]: W0130 10:17:01.994759 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bb375f7_22cc_4552_9c4b_49cb9ced2000.slice/crio-92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f WatchSource:0}: Error finding container 92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f: Status 404 returned error can't find the container with id 92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.097517 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09beb5d7-e657-4d93-abdb-5aa5f9d9cce4" path="/var/lib/kubelet/pods/09beb5d7-e657-4d93-abdb-5aa5f9d9cce4/volumes" Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.759432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" event={"ID":"3bb375f7-22cc-4552-9c4b-49cb9ced2000","Type":"ContainerStarted","Data":"70b6f513b6f674fb1913a7bb58c3c0f49b34088d5d5dfbdd67fc4ce8126acfdd"} Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.759511 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" event={"ID":"3bb375f7-22cc-4552-9c4b-49cb9ced2000","Type":"ContainerStarted","Data":"92c9e96e1230fbf405299979cfe939ba870f85b807467a452974b4ff911dbc6f"} Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.759893 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.766135 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" Jan 30 10:17:02 crc kubenswrapper[4984]: I0130 10:17:02.780935 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7696df588c-pl652" podStartSLOduration=3.780905919 podStartE2EDuration="3.780905919s" podCreationTimestamp="2026-01-30 10:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:02.779806818 +0000 UTC m=+327.346110642" watchObservedRunningTime="2026-01-30 10:17:02.780905919 +0000 UTC m=+327.347209743" Jan 30 10:17:03 crc kubenswrapper[4984]: I0130 10:17:03.001387 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:17:03 crc kubenswrapper[4984]: I0130 10:17:03.001467 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.948369 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.953297 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w4cgz" 
podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" containerID="cri-o://860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" gracePeriod=30 Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.957747 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.958099 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8cnkg" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" containerID="cri-o://8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" gracePeriod=30 Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.972508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.972837 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" containerID="cri-o://626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" gracePeriod=30 Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.986391 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:17:09 crc kubenswrapper[4984]: I0130 10:17:09.986660 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9vv7r" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" containerID="cri-o://acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" gracePeriod=30 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.001967 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.002312 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dc27n" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" containerID="cri-o://cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" gracePeriod=30 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.005229 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tttcx"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.006194 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.008177 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tttcx"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.123122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.123224 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-874f4\" (UniqueName: \"kubernetes.io/projected/ed0e4098-37d9-4094-99d0-1892881696ad-kube-api-access-874f4\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.123263 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.223911 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.224007 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-874f4\" (UniqueName: \"kubernetes.io/projected/ed0e4098-37d9-4094-99d0-1892881696ad-kube-api-access-874f4\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.224037 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.225378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tttcx\" (UID: 
\"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.232598 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed0e4098-37d9-4094-99d0-1892881696ad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.240889 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-874f4\" (UniqueName: \"kubernetes.io/projected/ed0e4098-37d9-4094-99d0-1892881696ad-kube-api-access-874f4\") pod \"marketplace-operator-79b997595-tttcx\" (UID: \"ed0e4098-37d9-4094-99d0-1892881696ad\") " pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.402896 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.473238 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.633416 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") pod \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.633614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") pod \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.633659 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") pod \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\" (UID: \"b92a67bb-8407-4e47-9d9a-9d15398d90ed\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.634488 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b92a67bb-8407-4e47-9d9a-9d15398d90ed" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.637881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5" (OuterVolumeSpecName: "kube-api-access-hssw5") pod "b92a67bb-8407-4e47-9d9a-9d15398d90ed" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed"). InnerVolumeSpecName "kube-api-access-hssw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.638061 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b92a67bb-8407-4e47-9d9a-9d15398d90ed" (UID: "b92a67bb-8407-4e47-9d9a-9d15398d90ed"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.654358 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.664550 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.680893 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.685891 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.734310 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") pod \"94ba287c-b444-471f-8be9-e1c553ee251e\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.734499 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") pod \"94ba287c-b444-471f-8be9-e1c553ee251e\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.734523 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") pod \"94ba287c-b444-471f-8be9-e1c553ee251e\" (UID: \"94ba287c-b444-471f-8be9-e1c553ee251e\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735120 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hssw5\" (UniqueName: \"kubernetes.io/projected/b92a67bb-8407-4e47-9d9a-9d15398d90ed-kube-api-access-hssw5\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735158 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735169 4984 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b92a67bb-8407-4e47-9d9a-9d15398d90ed-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.735451 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities" (OuterVolumeSpecName: "utilities") pod "94ba287c-b444-471f-8be9-e1c553ee251e" (UID: "94ba287c-b444-471f-8be9-e1c553ee251e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.736899 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87" (OuterVolumeSpecName: "kube-api-access-hxf87") pod "94ba287c-b444-471f-8be9-e1c553ee251e" (UID: "94ba287c-b444-471f-8be9-e1c553ee251e"). InnerVolumeSpecName "kube-api-access-hxf87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810289 4984 generic.go:334] "Generic (PLEG): container finished" podID="b628557d-490d-4803-8ae3-fde88678c6a4" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810362 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4cgz" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810393 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810441 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4cgz" event={"ID":"b628557d-490d-4803-8ae3-fde88678c6a4","Type":"ContainerDied","Data":"d624716dec815a31dc6fb1b18652f2e1a4591d64f410b4c644c4fb229fcd424e"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.810467 4984 scope.go:117] "RemoveContainer" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.815698 4984 generic.go:334] "Generic (PLEG): container finished" podID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.815867 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.816817 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerDied","Data":"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.816847 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lf7j" event={"ID":"b92a67bb-8407-4e47-9d9a-9d15398d90ed","Type":"ContainerDied","Data":"d50bbcffbf98d16fce57cd7c81f40638192b3cecf76451eac0e5109332dde5b2"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818851 4984 generic.go:334] "Generic (PLEG): container finished" podID="94ba287c-b444-471f-8be9-e1c553ee251e" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818921 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818948 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dc27n" event={"ID":"94ba287c-b444-471f-8be9-e1c553ee251e","Type":"ContainerDied","Data":"1cfc373681b5c8350c56d3afec6bc2d1d3ebd537567e72a8ed20ca5e0ce12d01"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.818924 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dc27n" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829232 4984 generic.go:334] "Generic (PLEG): container finished" podID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829332 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8cnkg" event={"ID":"4aab6e83-8a77-45ad-aa28-fe2c519133fb","Type":"ContainerDied","Data":"ff00561d64a6b687fd04a281bc0b10957180facce6b051e1ab6f63d8c0e3e399"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.829399 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8cnkg" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832399 4984 generic.go:334] "Generic (PLEG): container finished" podID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" exitCode=0 Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832436 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832445 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vv7r" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.832456 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vv7r" event={"ID":"44e02fc4-8da4-4122-bd3a-9b8f9734ec59","Type":"ContainerDied","Data":"4aa1ead20f9be6ec24d5528456caa32578bf134deec9e2dc8d7d858e101255c0"} Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836366 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") pod \"b628557d-490d-4803-8ae3-fde88678c6a4\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836410 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") pod \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836429 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") pod \"b628557d-490d-4803-8ae3-fde88678c6a4\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836516 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") pod \"b628557d-490d-4803-8ae3-fde88678c6a4\" (UID: \"b628557d-490d-4803-8ae3-fde88678c6a4\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836541 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") pod \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836565 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") pod \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836581 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") pod \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836607 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") pod \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\" (UID: \"44e02fc4-8da4-4122-bd3a-9b8f9734ec59\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836631 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") pod \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\" (UID: \"4aab6e83-8a77-45ad-aa28-fe2c519133fb\") " Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836823 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.836835 4984 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-hxf87\" (UniqueName: \"kubernetes.io/projected/94ba287c-b444-471f-8be9-e1c553ee251e-kube-api-access-hxf87\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.837563 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities" (OuterVolumeSpecName: "utilities") pod "44e02fc4-8da4-4122-bd3a-9b8f9734ec59" (UID: "44e02fc4-8da4-4122-bd3a-9b8f9734ec59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.838498 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities" (OuterVolumeSpecName: "utilities") pod "b628557d-490d-4803-8ae3-fde88678c6a4" (UID: "b628557d-490d-4803-8ae3-fde88678c6a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.840658 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn" (OuterVolumeSpecName: "kube-api-access-q7xqn") pod "4aab6e83-8a77-45ad-aa28-fe2c519133fb" (UID: "4aab6e83-8a77-45ad-aa28-fe2c519133fb"). InnerVolumeSpecName "kube-api-access-q7xqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.840778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw" (OuterVolumeSpecName: "kube-api-access-lgrgw") pod "b628557d-490d-4803-8ae3-fde88678c6a4" (UID: "b628557d-490d-4803-8ae3-fde88678c6a4"). InnerVolumeSpecName "kube-api-access-lgrgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.845035 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k" (OuterVolumeSpecName: "kube-api-access-9hs7k") pod "44e02fc4-8da4-4122-bd3a-9b8f9734ec59" (UID: "44e02fc4-8da4-4122-bd3a-9b8f9734ec59"). InnerVolumeSpecName "kube-api-access-9hs7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.848330 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities" (OuterVolumeSpecName: "utilities") pod "4aab6e83-8a77-45ad-aa28-fe2c519133fb" (UID: "4aab6e83-8a77-45ad-aa28-fe2c519133fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.848545 4984 scope.go:117] "RemoveContainer" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.864361 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.868631 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lf7j"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.881593 4984 scope.go:117] "RemoveContainer" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.883470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e02fc4-8da4-4122-bd3a-9b8f9734ec59" (UID: 
"44e02fc4-8da4-4122-bd3a-9b8f9734ec59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.894637 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b628557d-490d-4803-8ae3-fde88678c6a4" (UID: "b628557d-490d-4803-8ae3-fde88678c6a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.895179 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94ba287c-b444-471f-8be9-e1c553ee251e" (UID: "94ba287c-b444-471f-8be9-e1c553ee251e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.896593 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aab6e83-8a77-45ad-aa28-fe2c519133fb" (UID: "4aab6e83-8a77-45ad-aa28-fe2c519133fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897026 4984 scope.go:117] "RemoveContainer" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.897402 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc\": container with ID starting with 860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc not found: ID does not exist" containerID="860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897430 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc"} err="failed to get container status \"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc\": rpc error: code = NotFound desc = could not find container \"860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc\": container with ID starting with 860d223550b403a852f1347f40b6f7fc1e6caa689c1c9e9a2ecce981f560bbcc not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897450 4984 scope.go:117] "RemoveContainer" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.897752 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6\": container with ID starting with 501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6 not found: ID does not exist" containerID="501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897774 
4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6"} err="failed to get container status \"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6\": rpc error: code = NotFound desc = could not find container \"501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6\": container with ID starting with 501318fa994a4e42fe8da880a89aa024e6991ff13ad5fad8111f83ee34f675d6 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.897785 4984 scope.go:117] "RemoveContainer" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.898042 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8\": container with ID starting with f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8 not found: ID does not exist" containerID="f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.898060 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8"} err="failed to get container status \"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8\": rpc error: code = NotFound desc = could not find container \"f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8\": container with ID starting with f0e9db9337a4cecf4cbc10a84db016da554d5e961b6984cacd42f02449c5eea8 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.898072 4984 scope.go:117] "RemoveContainer" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 
10:17:10.909534 4984 scope.go:117] "RemoveContainer" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920223 4984 scope.go:117] "RemoveContainer" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.920657 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9\": container with ID starting with 626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9 not found: ID does not exist" containerID="626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920697 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9"} err="failed to get container status \"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9\": rpc error: code = NotFound desc = could not find container \"626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9\": container with ID starting with 626666ba8afa2cf1ad066dfc809dc7c0b4e57e154375aa8425e3efcea39b6ed9 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920723 4984 scope.go:117] "RemoveContainer" containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:17:10 crc kubenswrapper[4984]: E0130 10:17:10.920972 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7\": container with ID starting with a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7 not found: ID does not exist" 
containerID="a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.920999 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7"} err="failed to get container status \"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7\": rpc error: code = NotFound desc = could not find container \"a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7\": container with ID starting with a1b8a3dc2aba330d4f55b673e5451da132fae1d863784ad1437a4efa3d7c10e7 not found: ID does not exist" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.921022 4984 scope.go:117] "RemoveContainer" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.935688 4984 scope.go:117] "RemoveContainer" containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938377 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938494 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938599 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgrgw\" (UniqueName: \"kubernetes.io/projected/b628557d-490d-4803-8ae3-fde88678c6a4-kube-api-access-lgrgw\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938697 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938798 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7xqn\" (UniqueName: \"kubernetes.io/projected/4aab6e83-8a77-45ad-aa28-fe2c519133fb-kube-api-access-q7xqn\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938897 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hs7k\" (UniqueName: \"kubernetes.io/projected/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-kube-api-access-9hs7k\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.938992 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e02fc4-8da4-4122-bd3a-9b8f9734ec59-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.939093 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aab6e83-8a77-45ad-aa28-fe2c519133fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.939195 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b628557d-490d-4803-8ae3-fde88678c6a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.939317 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94ba287c-b444-471f-8be9-e1c553ee251e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.954395 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tttcx"] Jan 30 10:17:10 crc kubenswrapper[4984]: I0130 10:17:10.962882 
4984 scope.go:117] "RemoveContainer" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" Jan 30 10:17:10 crc kubenswrapper[4984]: W0130 10:17:10.968095 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded0e4098_37d9_4094_99d0_1892881696ad.slice/crio-f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3 WatchSource:0}: Error finding container f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3: Status 404 returned error can't find the container with id f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3 Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.001683 4984 scope.go:117] "RemoveContainer" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.002237 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d\": container with ID starting with cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d not found: ID does not exist" containerID="cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.002348 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d"} err="failed to get container status \"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d\": rpc error: code = NotFound desc = could not find container \"cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d\": container with ID starting with cfc4e0a05c358d92269d6e5828c917f4294e2cd3834f2eb9aaad8c45fe48d87d not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.002427 4984 scope.go:117] "RemoveContainer" 
containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.002834 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5\": container with ID starting with d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5 not found: ID does not exist" containerID="d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.002942 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5"} err="failed to get container status \"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5\": rpc error: code = NotFound desc = could not find container \"d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5\": container with ID starting with d02f73142df6e20fcca2344508a97ddede85521a0768492bf0dd1eeb2eb715d5 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.003039 4984 scope.go:117] "RemoveContainer" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.003480 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193\": container with ID starting with ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193 not found: ID does not exist" containerID="ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.003618 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193"} err="failed to get container status \"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193\": rpc error: code = NotFound desc = could not find container \"ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193\": container with ID starting with ce0ebb95fee20ecf9909f3bd54b02b4d4d353e749980ea52646740069ff99193 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.003724 4984 scope.go:117] "RemoveContainer" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.020960 4984 scope.go:117] "RemoveContainer" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.044790 4984 scope.go:117] "RemoveContainer" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.068115 4984 scope.go:117] "RemoveContainer" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.069060 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb\": container with ID starting with 8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb not found: ID does not exist" containerID="8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069104 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb"} err="failed to get container status \"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb\": rpc error: code = 
NotFound desc = could not find container \"8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb\": container with ID starting with 8db38f70d8580da7efd63b4caaf2fef198fa1d7f0eb296a746795e941de004fb not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069129 4984 scope.go:117] "RemoveContainer" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.069542 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74\": container with ID starting with e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74 not found: ID does not exist" containerID="e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069563 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74"} err="failed to get container status \"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74\": rpc error: code = NotFound desc = could not find container \"e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74\": container with ID starting with e3c8efad27a973257dab665d13ccb5ada18e9264331a7a70104b18e3552bae74 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.069575 4984 scope.go:117] "RemoveContainer" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.069954 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba\": container with ID starting with 
1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba not found: ID does not exist" containerID="1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.070076 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba"} err="failed to get container status \"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba\": rpc error: code = NotFound desc = could not find container \"1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba\": container with ID starting with 1776b0fcde3259037007883bff0fd3a1bae801219ee25754ed10c7ce5142ebba not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.070179 4984 scope.go:117] "RemoveContainer" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.092033 4984 scope.go:117] "RemoveContainer" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.161522 4984 scope.go:117] "RemoveContainer" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.196328 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.199500 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w4cgz"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.208682 4984 scope.go:117] "RemoveContainer" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.210607 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3\": container with ID starting with acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3 not found: ID does not exist" containerID="acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.210688 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3"} err="failed to get container status \"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3\": rpc error: code = NotFound desc = could not find container \"acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3\": container with ID starting with acb4bb6a4f880274b17f49958048d09ee754ffa36f904943f1dcf312aa51fdf3 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.210734 4984 scope.go:117] "RemoveContainer" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.211768 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8\": container with ID starting with 4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8 not found: ID does not exist" containerID="4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.211801 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8"} err="failed to get container status \"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8\": rpc error: code = NotFound desc = could not find container \"4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8\": 
container with ID starting with 4c47e9b8a203bf666954e0aaae6b8e8edce8151023c16794f51fe02cbb5cc5b8 not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.211825 4984 scope.go:117] "RemoveContainer" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.212574 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f\": container with ID starting with 2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f not found: ID does not exist" containerID="2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.212596 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f"} err="failed to get container status \"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f\": rpc error: code = NotFound desc = could not find container \"2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f\": container with ID starting with 2dcacd4dcdb409b5df35aa805e7edd89bc4fc3fa44ed8b1e0992bc4ea592871f not found: ID does not exist" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.216562 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.222296 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vv7r"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.232508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.235288 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-dc27n"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.247342 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.251243 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8cnkg"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690673 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8prhf"] Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690889 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690905 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690919 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690927 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690939 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690948 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690958 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-content" Jan 30 10:17:11 crc 
kubenswrapper[4984]: I0130 10:17:11.690967 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690979 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.690988 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.690998 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691007 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691019 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691027 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691041 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691048 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691058 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc 
kubenswrapper[4984]: I0130 10:17:11.691067 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691079 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691087 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691097 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691106 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691119 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691127 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="extract-content" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691140 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691150 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="extract-utilities" Jan 30 10:17:11 crc kubenswrapper[4984]: E0130 10:17:11.691165 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" Jan 30 10:17:11 crc 
kubenswrapper[4984]: I0130 10:17:11.691176 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691306 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691322 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691336 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691348 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691361 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" containerName="registry-server" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.691374 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" containerName="marketplace-operator" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.692240 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.694496 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.704546 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8prhf"] Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.840553 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" event={"ID":"ed0e4098-37d9-4094-99d0-1892881696ad","Type":"ContainerStarted","Data":"e66f965fe7aae5cc6c0005cae866c92a7668a150f013c269281c0c6e2318b7d1"} Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.840623 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" event={"ID":"ed0e4098-37d9-4094-99d0-1892881696ad","Type":"ContainerStarted","Data":"f5525d8f51450c2eefc5cc257a387ae7ab1e0412fb838d1e191c80b2a9839bf3"} Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.840871 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.845858 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.851465 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-utilities\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.851511 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxntp\" (UniqueName: \"kubernetes.io/projected/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-kube-api-access-rxntp\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.851711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-catalog-content\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.859899 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tttcx" podStartSLOduration=2.859875179 podStartE2EDuration="2.859875179s" podCreationTimestamp="2026-01-30 10:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:11.856716731 +0000 UTC m=+336.423020555" watchObservedRunningTime="2026-01-30 10:17:11.859875179 +0000 UTC m=+336.426179003" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953233 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-utilities\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953295 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxntp\" (UniqueName: 
\"kubernetes.io/projected/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-kube-api-access-rxntp\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953349 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-catalog-content\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953929 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-catalog-content\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.953960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-utilities\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:11 crc kubenswrapper[4984]: I0130 10:17:11.975326 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxntp\" (UniqueName: \"kubernetes.io/projected/719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d-kube-api-access-rxntp\") pod \"redhat-marketplace-8prhf\" (UID: \"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d\") " pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.021122 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.099681 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e02fc4-8da4-4122-bd3a-9b8f9734ec59" path="/var/lib/kubelet/pods/44e02fc4-8da4-4122-bd3a-9b8f9734ec59/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.101005 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aab6e83-8a77-45ad-aa28-fe2c519133fb" path="/var/lib/kubelet/pods/4aab6e83-8a77-45ad-aa28-fe2c519133fb/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.102280 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ba287c-b444-471f-8be9-e1c553ee251e" path="/var/lib/kubelet/pods/94ba287c-b444-471f-8be9-e1c553ee251e/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.103715 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b628557d-490d-4803-8ae3-fde88678c6a4" path="/var/lib/kubelet/pods/b628557d-490d-4803-8ae3-fde88678c6a4/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.104581 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92a67bb-8407-4e47-9d9a-9d15398d90ed" path="/var/lib/kubelet/pods/b92a67bb-8407-4e47-9d9a-9d15398d90ed/volumes" Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.437446 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8prhf"] Jan 30 10:17:12 crc kubenswrapper[4984]: W0130 10:17:12.445357 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719f7e0f_9e74_40fe_b2cb_a967e9e0ac4d.slice/crio-7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360 WatchSource:0}: Error finding container 7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360: Status 404 returned error can't find the container with id 
7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360 Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.849789 4984 generic.go:334] "Generic (PLEG): container finished" podID="719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d" containerID="806b86815f0eb53cbea203f9a2da1723e2a4b34380c9188a2755ec9e1452e070" exitCode=0 Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.849929 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerDied","Data":"806b86815f0eb53cbea203f9a2da1723e2a4b34380c9188a2755ec9e1452e070"} Jan 30 10:17:12 crc kubenswrapper[4984]: I0130 10:17:12.850402 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerStarted","Data":"7442ca737a4f056be85af907ce3534de794f11b0d92c56bf793e6bc963b45360"} Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.093624 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-47j92"] Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.095203 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.097864 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.113887 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47j92"] Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.167848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnn82\" (UniqueName: \"kubernetes.io/projected/24af9dab-3f7a-4433-b367-5ecafcf89754-kube-api-access-rnn82\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.167921 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-utilities\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.167948 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-catalog-content\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.269211 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-utilities\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " 
pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.269301 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-catalog-content\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.269372 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnn82\" (UniqueName: \"kubernetes.io/projected/24af9dab-3f7a-4433-b367-5ecafcf89754-kube-api-access-rnn82\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.270041 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-utilities\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.270056 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24af9dab-3f7a-4433-b367-5ecafcf89754-catalog-content\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.300249 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnn82\" (UniqueName: \"kubernetes.io/projected/24af9dab-3f7a-4433-b367-5ecafcf89754-kube-api-access-rnn82\") pod \"redhat-operators-47j92\" (UID: \"24af9dab-3f7a-4433-b367-5ecafcf89754\") " pod="openshift-marketplace/redhat-operators-47j92" Jan 
30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.414580 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.815234 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47j92"] Jan 30 10:17:13 crc kubenswrapper[4984]: W0130 10:17:13.830423 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24af9dab_3f7a_4433_b367_5ecafcf89754.slice/crio-499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f WatchSource:0}: Error finding container 499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f: Status 404 returned error can't find the container with id 499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.857688 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerStarted","Data":"499248b9cc13117b6e9f73ef395d83e9a817dbf13bfa0401aa60386821d5a80f"} Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.859389 4984 generic.go:334] "Generic (PLEG): container finished" podID="719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d" containerID="dfede5255ad7fa3967d73e48ff3b4bf1c4baf42f585ac700b13bbcdf6a442422" exitCode=0 Jan 30 10:17:13 crc kubenswrapper[4984]: I0130 10:17:13.860368 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerDied","Data":"dfede5255ad7fa3967d73e48ff3b4bf1c4baf42f585ac700b13bbcdf6a442422"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.100357 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zckjp"] Jan 30 10:17:14 crc 
kubenswrapper[4984]: I0130 10:17:14.102571 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.107816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.130942 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zckjp"] Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.181337 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-utilities\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.181380 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xb5f\" (UniqueName: \"kubernetes.io/projected/c47b45ee-75cf-4e33-bfde-721099cda0a9-kube-api-access-6xb5f\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.181409 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-catalog-content\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283141 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-utilities\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283190 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xb5f\" (UniqueName: \"kubernetes.io/projected/c47b45ee-75cf-4e33-bfde-721099cda0a9-kube-api-access-6xb5f\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283222 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-catalog-content\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-catalog-content\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.283905 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47b45ee-75cf-4e33-bfde-721099cda0a9-utilities\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.307331 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xb5f\" (UniqueName: 
\"kubernetes.io/projected/c47b45ee-75cf-4e33-bfde-721099cda0a9-kube-api-access-6xb5f\") pod \"certified-operators-zckjp\" (UID: \"c47b45ee-75cf-4e33-bfde-721099cda0a9\") " pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.430857 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.842682 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zckjp"] Jan 30 10:17:14 crc kubenswrapper[4984]: W0130 10:17:14.847491 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc47b45ee_75cf_4e33_bfde_721099cda0a9.slice/crio-ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4 WatchSource:0}: Error finding container ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4: Status 404 returned error can't find the container with id ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4 Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.866673 4984 generic.go:334] "Generic (PLEG): container finished" podID="24af9dab-3f7a-4433-b367-5ecafcf89754" containerID="47ecda0029228306a7b8a47d8f098ce8c53744ce863e056fdfde483e9dd11ca1" exitCode=0 Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.866754 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerDied","Data":"47ecda0029228306a7b8a47d8f098ce8c53744ce863e056fdfde483e9dd11ca1"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.870291 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8prhf" 
event={"ID":"719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d","Type":"ContainerStarted","Data":"b2a0860734fd0e7d060f07123b107886535e6a72ced4d4cfb58cff4b8639eb7d"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.873357 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerStarted","Data":"ca8a4f06ef762614c07c1e67e3cf6bdd10f9df91ff712dced7da9117bbbfe0c4"} Jan 30 10:17:14 crc kubenswrapper[4984]: I0130 10:17:14.899477 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8prhf" podStartSLOduration=2.452469697 podStartE2EDuration="3.899462935s" podCreationTimestamp="2026-01-30 10:17:11 +0000 UTC" firstStartedPulling="2026-01-30 10:17:12.851345019 +0000 UTC m=+337.417648843" lastFinishedPulling="2026-01-30 10:17:14.298338257 +0000 UTC m=+338.864642081" observedRunningTime="2026-01-30 10:17:14.899455405 +0000 UTC m=+339.465759239" watchObservedRunningTime="2026-01-30 10:17:14.899462935 +0000 UTC m=+339.465766759" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.488058 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hn9gx"] Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.489149 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.494040 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.496119 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn9gx"] Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.600187 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78s52\" (UniqueName: \"kubernetes.io/projected/a725adac-ef1c-400b-bde2-756c97779906-kube-api-access-78s52\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.600320 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-catalog-content\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.600348 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-utilities\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.701647 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78s52\" (UniqueName: \"kubernetes.io/projected/a725adac-ef1c-400b-bde2-756c97779906-kube-api-access-78s52\") pod \"community-operators-hn9gx\" 
(UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.701729 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-catalog-content\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.701747 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-utilities\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.702356 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-utilities\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.702601 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a725adac-ef1c-400b-bde2-756c97779906-catalog-content\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.721186 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78s52\" (UniqueName: \"kubernetes.io/projected/a725adac-ef1c-400b-bde2-756c97779906-kube-api-access-78s52\") pod \"community-operators-hn9gx\" (UID: \"a725adac-ef1c-400b-bde2-756c97779906\") " 
pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.854292 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.881370 4984 generic.go:334] "Generic (PLEG): container finished" podID="c47b45ee-75cf-4e33-bfde-721099cda0a9" containerID="487d175dd5ae0ff0acad6068ead72fb6c51a679c5979ee525cdc7e53c255532c" exitCode=0 Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.881557 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerDied","Data":"487d175dd5ae0ff0acad6068ead72fb6c51a679c5979ee525cdc7e53c255532c"} Jan 30 10:17:15 crc kubenswrapper[4984]: I0130 10:17:15.883738 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerStarted","Data":"90bcea06094aee3cc94b04024be7e2ce3242a3ee4d36474028403aee215e8d9a"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.301022 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hn9gx"] Jan 30 10:17:16 crc kubenswrapper[4984]: W0130 10:17:16.309414 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda725adac_ef1c_400b_bde2_756c97779906.slice/crio-ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a WatchSource:0}: Error finding container ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a: Status 404 returned error can't find the container with id ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.888587 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="a725adac-ef1c-400b-bde2-756c97779906" containerID="1044e6b90beb26c2b6c1eeb33ee89ad1764ec5bb79add932e4ee2ae5a8ed8506" exitCode=0 Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.888683 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerDied","Data":"1044e6b90beb26c2b6c1eeb33ee89ad1764ec5bb79add932e4ee2ae5a8ed8506"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.889082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerStarted","Data":"ecd172593b8137cc58b6a038819decbdc7ee0a5cf958c2f0bc8b2a544c4bad8a"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.893149 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerStarted","Data":"defacf66931fe0e949441d23986abe04d51219516ee62d967bbdd01e1910c8d2"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.913243 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerDied","Data":"90bcea06094aee3cc94b04024be7e2ce3242a3ee4d36474028403aee215e8d9a"} Jan 30 10:17:16 crc kubenswrapper[4984]: I0130 10:17:16.913366 4984 generic.go:334] "Generic (PLEG): container finished" podID="24af9dab-3f7a-4433-b367-5ecafcf89754" containerID="90bcea06094aee3cc94b04024be7e2ce3242a3ee4d36474028403aee215e8d9a" exitCode=0 Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.921162 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47j92" event={"ID":"24af9dab-3f7a-4433-b367-5ecafcf89754","Type":"ContainerStarted","Data":"5d90c6fc96816176aebf6676207c28fc3c673322d5e57773cfa1c3e61e12fea8"} 
Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.924866 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerStarted","Data":"ff76d96754dd113685c92e440ae8e38ee1db12b4a22bc41eb4bc0e0619a70c6f"} Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.935627 4984 generic.go:334] "Generic (PLEG): container finished" podID="c47b45ee-75cf-4e33-bfde-721099cda0a9" containerID="defacf66931fe0e949441d23986abe04d51219516ee62d967bbdd01e1910c8d2" exitCode=0 Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.935693 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerDied","Data":"defacf66931fe0e949441d23986abe04d51219516ee62d967bbdd01e1910c8d2"} Jan 30 10:17:17 crc kubenswrapper[4984]: I0130 10:17:17.952209 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-47j92" podStartSLOduration=2.469165467 podStartE2EDuration="4.952192867s" podCreationTimestamp="2026-01-30 10:17:13 +0000 UTC" firstStartedPulling="2026-01-30 10:17:14.86843528 +0000 UTC m=+339.434739104" lastFinishedPulling="2026-01-30 10:17:17.35146265 +0000 UTC m=+341.917766504" observedRunningTime="2026-01-30 10:17:17.946690274 +0000 UTC m=+342.512994118" watchObservedRunningTime="2026-01-30 10:17:17.952192867 +0000 UTC m=+342.518496691" Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.947880 4984 generic.go:334] "Generic (PLEG): container finished" podID="a725adac-ef1c-400b-bde2-756c97779906" containerID="ff76d96754dd113685c92e440ae8e38ee1db12b4a22bc41eb4bc0e0619a70c6f" exitCode=0 Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.947967 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" 
event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerDied","Data":"ff76d96754dd113685c92e440ae8e38ee1db12b4a22bc41eb4bc0e0619a70c6f"} Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.953349 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zckjp" event={"ID":"c47b45ee-75cf-4e33-bfde-721099cda0a9","Type":"ContainerStarted","Data":"8c70e413c823685c1d3d70a41e1044476c0de319360ad0ad5db6742c34076ae0"} Jan 30 10:17:18 crc kubenswrapper[4984]: I0130 10:17:18.994739 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zckjp" podStartSLOduration=2.516890264 podStartE2EDuration="4.99471038s" podCreationTimestamp="2026-01-30 10:17:14 +0000 UTC" firstStartedPulling="2026-01-30 10:17:15.883081735 +0000 UTC m=+340.449385559" lastFinishedPulling="2026-01-30 10:17:18.360901851 +0000 UTC m=+342.927205675" observedRunningTime="2026-01-30 10:17:18.989632659 +0000 UTC m=+343.555936493" watchObservedRunningTime="2026-01-30 10:17:18.99471038 +0000 UTC m=+343.561014214" Jan 30 10:17:19 crc kubenswrapper[4984]: I0130 10:17:19.960583 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hn9gx" event={"ID":"a725adac-ef1c-400b-bde2-756c97779906","Type":"ContainerStarted","Data":"65e5e09de7b9d1abf6b633becaa052b28ee393c234c2fe1eba6273ee4044e068"} Jan 30 10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.022190 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.022950 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.074328 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 
10:17:22 crc kubenswrapper[4984]: I0130 10:17:22.099988 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hn9gx" podStartSLOduration=4.61704485 podStartE2EDuration="7.099971657s" podCreationTimestamp="2026-01-30 10:17:15 +0000 UTC" firstStartedPulling="2026-01-30 10:17:16.890613834 +0000 UTC m=+341.456917658" lastFinishedPulling="2026-01-30 10:17:19.373540641 +0000 UTC m=+343.939844465" observedRunningTime="2026-01-30 10:17:19.983921757 +0000 UTC m=+344.550225581" watchObservedRunningTime="2026-01-30 10:17:22.099971657 +0000 UTC m=+346.666275481" Jan 30 10:17:23 crc kubenswrapper[4984]: I0130 10:17:23.012177 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8prhf" Jan 30 10:17:23 crc kubenswrapper[4984]: I0130 10:17:23.415162 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:23 crc kubenswrapper[4984]: I0130 10:17:23.415569 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:24 crc kubenswrapper[4984]: I0130 10:17:24.431969 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:24 crc kubenswrapper[4984]: I0130 10:17:24.432054 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:24 crc kubenswrapper[4984]: I0130 10:17:24.469971 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47j92" podUID="24af9dab-3f7a-4433-b367-5ecafcf89754" containerName="registry-server" probeResult="failure" output=< Jan 30 10:17:24 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:17:24 crc kubenswrapper[4984]: > Jan 30 10:17:24 
crc kubenswrapper[4984]: I0130 10:17:24.483633 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.027607 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zckjp" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.855338 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.855852 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:25 crc kubenswrapper[4984]: I0130 10:17:25.899867 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:26 crc kubenswrapper[4984]: I0130 10:17:26.032966 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hn9gx" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.799667 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nk8tk"] Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.800754 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.830052 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nk8tk"] Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894724 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894774 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lm6\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-kube-api-access-v7lm6\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894820 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-certificates\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.894849 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3dc9055-604a-4c4d-b57e-de76e82bcc80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895024 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-trusted-ca\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895114 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3dc9055-604a-4c4d-b57e-de76e82bcc80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-bound-sa-token\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.895183 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-tls\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.915389 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.996932 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-certificates\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.996994 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3dc9055-604a-4c4d-b57e-de76e82bcc80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997044 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-trusted-ca\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3dc9055-604a-4c4d-b57e-de76e82bcc80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997109 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-bound-sa-token\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997136 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-tls\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997572 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7lm6\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-kube-api-access-v7lm6\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.997646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b3dc9055-604a-4c4d-b57e-de76e82bcc80-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.998437 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-certificates\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" 
Jan 30 10:17:30 crc kubenswrapper[4984]: I0130 10:17:30.999118 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3dc9055-604a-4c4d-b57e-de76e82bcc80-trusted-ca\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.005082 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-registry-tls\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.008910 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b3dc9055-604a-4c4d-b57e-de76e82bcc80-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.021340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-bound-sa-token\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.022393 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7lm6\" (UniqueName: \"kubernetes.io/projected/b3dc9055-604a-4c4d-b57e-de76e82bcc80-kube-api-access-v7lm6\") pod \"image-registry-66df7c8f76-nk8tk\" (UID: \"b3dc9055-604a-4c4d-b57e-de76e82bcc80\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.115236 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:31 crc kubenswrapper[4984]: I0130 10:17:31.545474 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nk8tk"] Jan 30 10:17:31 crc kubenswrapper[4984]: W0130 10:17:31.551416 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3dc9055_604a_4c4d_b57e_de76e82bcc80.slice/crio-fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73 WatchSource:0}: Error finding container fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73: Status 404 returned error can't find the container with id fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73 Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.026782 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" event={"ID":"b3dc9055-604a-4c4d-b57e-de76e82bcc80","Type":"ContainerStarted","Data":"ae9781c5015bfacf2d93b2701ee339ff0f4c375c49cc5ca8487ffe64f37ba9e0"} Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.026824 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" event={"ID":"b3dc9055-604a-4c4d-b57e-de76e82bcc80","Type":"ContainerStarted","Data":"fc6d4ea3526015fac5834fe5e3867e9a8b62cfb24d53f0471ec8f249898d6c73"} Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.027809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:32 crc kubenswrapper[4984]: I0130 10:17:32.044938 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" podStartSLOduration=2.044925308 podStartE2EDuration="2.044925308s" podCreationTimestamp="2026-01-30 10:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:32.041603165 +0000 UTC m=+356.607906989" watchObservedRunningTime="2026-01-30 10:17:32.044925308 +0000 UTC m=+356.611229122" Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.000456 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.000827 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.453016 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:33 crc kubenswrapper[4984]: I0130 10:17:33.492389 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-47j92" Jan 30 10:17:39 crc kubenswrapper[4984]: I0130 10:17:39.590440 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:17:39 crc kubenswrapper[4984]: I0130 10:17:39.591305 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" 
podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" containerID="cri-o://d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" gracePeriod=30 Jan 30 10:17:39 crc kubenswrapper[4984]: I0130 10:17:39.948125 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064157 4984 generic.go:334] "Generic (PLEG): container finished" podID="651b92be-48ed-4019-8a48-91138fdcd356" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" exitCode=0 Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064226 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064215 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerDied","Data":"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c"} Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064552 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv" event={"ID":"651b92be-48ed-4019-8a48-91138fdcd356","Type":"ContainerDied","Data":"365e334180612a639f5ba661049874fb4f2f877225cb9d8766b3099b7bb63022"} Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.064577 4984 scope.go:117] "RemoveContainer" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.084934 4984 scope.go:117] "RemoveContainer" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" Jan 30 10:17:40 crc kubenswrapper[4984]: 
E0130 10:17:40.085406 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c\": container with ID starting with d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c not found: ID does not exist" containerID="d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.085439 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c"} err="failed to get container status \"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c\": rpc error: code = NotFound desc = could not find container \"d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c\": container with ID starting with d3e08fb6ebdae25fc37ae6a30e3418632aa6c6af540ce153df4f4fa4b83efe0c not found: ID does not exist" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.116964 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") pod \"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.117081 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") pod \"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.117156 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") pod 
\"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.117190 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") pod \"651b92be-48ed-4019-8a48-91138fdcd356\" (UID: \"651b92be-48ed-4019-8a48-91138fdcd356\") " Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.118224 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca" (OuterVolumeSpecName: "client-ca") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.119368 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config" (OuterVolumeSpecName: "config") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.119525 4984 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.119559 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651b92be-48ed-4019-8a48-91138fdcd356-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.123956 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.123986 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w" (OuterVolumeSpecName: "kube-api-access-hws8w") pod "651b92be-48ed-4019-8a48-91138fdcd356" (UID: "651b92be-48ed-4019-8a48-91138fdcd356"). InnerVolumeSpecName "kube-api-access-hws8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.221416 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hws8w\" (UniqueName: \"kubernetes.io/projected/651b92be-48ed-4019-8a48-91138fdcd356-kube-api-access-hws8w\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.221454 4984 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651b92be-48ed-4019-8a48-91138fdcd356-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.395960 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:17:40 crc kubenswrapper[4984]: I0130 10:17:40.401220 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76b5f4d9cd-ngrtv"] Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.461477 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8694669774-42krh"] Jan 30 10:17:41 crc kubenswrapper[4984]: E0130 10:17:41.462467 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.462500 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.462593 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="651b92be-48ed-4019-8a48-91138fdcd356" containerName="route-controller-manager" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.463008 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.464940 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.464989 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.465762 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.466016 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.466108 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.466187 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.475739 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8694669774-42krh"] Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637100 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-config\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637147 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5n8l\" (UniqueName: \"kubernetes.io/projected/4d14038f-705c-4e89-8fb3-fee372da5d38-kube-api-access-q5n8l\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637533 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-client-ca\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.637618 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d14038f-705c-4e89-8fb3-fee372da5d38-serving-cert\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.738818 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-client-ca\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.738897 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d14038f-705c-4e89-8fb3-fee372da5d38-serving-cert\") pod 
\"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.738957 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-config\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.739005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5n8l\" (UniqueName: \"kubernetes.io/projected/4d14038f-705c-4e89-8fb3-fee372da5d38-kube-api-access-q5n8l\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.740291 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-config\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.740520 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d14038f-705c-4e89-8fb3-fee372da5d38-client-ca\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.744539 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d14038f-705c-4e89-8fb3-fee372da5d38-serving-cert\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.760575 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5n8l\" (UniqueName: \"kubernetes.io/projected/4d14038f-705c-4e89-8fb3-fee372da5d38-kube-api-access-q5n8l\") pod \"route-controller-manager-8694669774-42krh\" (UID: \"4d14038f-705c-4e89-8fb3-fee372da5d38\") " pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:41 crc kubenswrapper[4984]: I0130 10:17:41.818964 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:42 crc kubenswrapper[4984]: I0130 10:17:42.097940 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651b92be-48ed-4019-8a48-91138fdcd356" path="/var/lib/kubelet/pods/651b92be-48ed-4019-8a48-91138fdcd356/volumes" Jan 30 10:17:42 crc kubenswrapper[4984]: I0130 10:17:42.219417 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8694669774-42krh"] Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.083333 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" event={"ID":"4d14038f-705c-4e89-8fb3-fee372da5d38","Type":"ContainerStarted","Data":"4e55769d87baf973f2b758cc1a2904d9f88fc25f680db481ff582d45fa1bddf3"} Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.083389 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" event={"ID":"4d14038f-705c-4e89-8fb3-fee372da5d38","Type":"ContainerStarted","Data":"36037868e40bcec16ed23d16e1cd857c89473c25ee8745ba9e347d5db46f52ed"} Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.085227 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.091143 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" Jan 30 10:17:43 crc kubenswrapper[4984]: I0130 10:17:43.108791 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8694669774-42krh" podStartSLOduration=4.108774891 podStartE2EDuration="4.108774891s" podCreationTimestamp="2026-01-30 10:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:17:43.105113269 +0000 UTC m=+367.671417103" watchObservedRunningTime="2026-01-30 10:17:43.108774891 +0000 UTC m=+367.675078715" Jan 30 10:17:51 crc kubenswrapper[4984]: I0130 10:17:51.122948 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nk8tk" Jan 30 10:17:51 crc kubenswrapper[4984]: I0130 10:17:51.209195 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"] Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.001374 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.001816 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.001959 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.002838 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.003233 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761" gracePeriod=600 Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.223349 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761" exitCode=0 Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.223417 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"} Jan 30 10:18:03 crc kubenswrapper[4984]: I0130 10:18:03.223467 4984 scope.go:117] "RemoveContainer" containerID="e69989d4f682e18cba80117e9bacc7b6ed7ec748de2ad08c1d187265a494e87e" Jan 30 10:18:04 crc kubenswrapper[4984]: I0130 10:18:04.232296 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4"} Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.277574 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" containerID="cri-o://2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" gracePeriod=30 Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.734993 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877407 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877500 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877534 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wq2j\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877584 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877734 4984 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877777 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.877866 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") pod \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\" (UID: \"d3d42d7f-49ec-4169-a79d-f46ccd275e20\") " Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.879494 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.879640 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.885972 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.886765 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.888375 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.891618 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.893130 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j" (OuterVolumeSpecName: "kube-api-access-8wq2j") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "kube-api-access-8wq2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.913480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d3d42d7f-49ec-4169-a79d-f46ccd275e20" (UID: "d3d42d7f-49ec-4169-a79d-f46ccd275e20"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979553 4984 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3d42d7f-49ec-4169-a79d-f46ccd275e20-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979618 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979639 4984 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979659 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wq2j\" (UniqueName: 
\"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-kube-api-access-8wq2j\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979681 4984 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3d42d7f-49ec-4169-a79d-f46ccd275e20-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979701 4984 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d42d7f-49ec-4169-a79d-f46ccd275e20-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:16 crc kubenswrapper[4984]: I0130 10:18:16.979721 4984 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3d42d7f-49ec-4169-a79d-f46ccd275e20-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325667 4984 generic.go:334] "Generic (PLEG): container finished" podID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839" exitCode=0 Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325725 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerDied","Data":"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"} Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325772 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" event={"ID":"d3d42d7f-49ec-4169-a79d-f46ccd275e20","Type":"ContainerDied","Data":"79705b85e33c0776d034e28c0f0671763dc639d3eee8637beef1fb06cd051685"} Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325770 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn"
Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.325796 4984 scope.go:117] "RemoveContainer" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"
Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.350407 4984 scope.go:117] "RemoveContainer" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"
Jan 30 10:18:17 crc kubenswrapper[4984]: E0130 10:18:17.352044 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839\": container with ID starting with 2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839 not found: ID does not exist" containerID="2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"
Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.352088 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839"} err="failed to get container status \"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839\": rpc error: code = NotFound desc = could not find container \"2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839\": container with ID starting with 2f7ec7486a294c46581fe24b9b9f813d4dda3b25443b56c9f300d1fad9d19839 not found: ID does not exist"
Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.383147 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"]
Jan 30 10:18:17 crc kubenswrapper[4984]: I0130 10:18:17.395007 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv7sn"]
Jan 30 10:18:18 crc kubenswrapper[4984]: I0130 10:18:18.101515 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" path="/var/lib/kubelet/pods/d3d42d7f-49ec-4169-a79d-f46ccd275e20/volumes"
Jan 30 10:18:21 crc kubenswrapper[4984]: I0130 10:18:21.708113 4984 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-lv7sn container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.28:5000/healthz\": dial tcp 10.217.0.28:5000: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 10:18:21 crc kubenswrapper[4984]: I0130 10:18:21.708210 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-lv7sn" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.28:5000/healthz\": dial tcp 10.217.0.28:5000: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Jan 30 10:20:03 crc kubenswrapper[4984]: I0130 10:20:03.000881 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:20:03 crc kubenswrapper[4984]: I0130 10:20:03.001963 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:20:33 crc kubenswrapper[4984]: I0130 10:20:33.000641 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:20:33 crc kubenswrapper[4984]: I0130 10:20:33.001313 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.000678 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.001286 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.001335 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh"
Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.001939 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 10:21:03 crc kubenswrapper[4984]: I0130 10:21:03.002006 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4" gracePeriod=600
Jan 30 10:21:04 crc kubenswrapper[4984]: I0130 10:21:04.401168 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4" exitCode=0
Jan 30 10:21:04 crc kubenswrapper[4984]: I0130 10:21:04.401303 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4"}
Jan 30 10:21:04 crc kubenswrapper[4984]: I0130 10:21:04.401803 4984 scope.go:117] "RemoveContainer" containerID="fb9180fd0ed617032aaf0573c6624fc2b1d960bd1b14e7d52aa89bccc115c761"
Jan 30 10:21:05 crc kubenswrapper[4984]: I0130 10:21:05.408622 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71"}
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.732533 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"]
Jan 30 10:22:40 crc kubenswrapper[4984]: E0130 10:22:40.733368 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.733384 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.733487 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d42d7f-49ec-4169-a79d-f46ccd275e20" containerName="registry"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.733919 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.736434 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.736968 4984 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-682sg"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.739106 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.747242 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"]
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.751754 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-rlb95"]
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.752649 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rlb95"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.755517 4984 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-trd9q"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.766044 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rlb95"]
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.780855 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r7gsp"]
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.781744 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.783677 4984 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s2p7s"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.792036 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r7gsp"]
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.878591 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtp6t\" (UniqueName: \"kubernetes.io/projected/4a218ad6-abfb-49ac-9f07-a79d9f3bd07e-kube-api-access-qtp6t\") pod \"cert-manager-webhook-687f57d79b-r7gsp\" (UID: \"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.878658 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f1c83115-1333-4064-8217-eb2edae57d74-kube-api-access-jfdzh\") pod \"cert-manager-858654f9db-rlb95\" (UID: \"f1c83115-1333-4064-8217-eb2edae57d74\") " pod="cert-manager/cert-manager-858654f9db-rlb95"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.878689 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584nw\" (UniqueName: \"kubernetes.io/projected/c7557472-15a5-48a9-8a84-bd8478d45a4b-kube-api-access-584nw\") pod \"cert-manager-cainjector-cf98fcc89-2f5gm\" (UID: \"c7557472-15a5-48a9-8a84-bd8478d45a4b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.979345 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f1c83115-1333-4064-8217-eb2edae57d74-kube-api-access-jfdzh\") pod \"cert-manager-858654f9db-rlb95\" (UID: \"f1c83115-1333-4064-8217-eb2edae57d74\") " pod="cert-manager/cert-manager-858654f9db-rlb95"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.979385 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584nw\" (UniqueName: \"kubernetes.io/projected/c7557472-15a5-48a9-8a84-bd8478d45a4b-kube-api-access-584nw\") pod \"cert-manager-cainjector-cf98fcc89-2f5gm\" (UID: \"c7557472-15a5-48a9-8a84-bd8478d45a4b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.979636 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtp6t\" (UniqueName: \"kubernetes.io/projected/4a218ad6-abfb-49ac-9f07-a79d9f3bd07e-kube-api-access-qtp6t\") pod \"cert-manager-webhook-687f57d79b-r7gsp\" (UID: \"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:40 crc kubenswrapper[4984]: I0130 10:22:40.998925 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584nw\" (UniqueName: \"kubernetes.io/projected/c7557472-15a5-48a9-8a84-bd8478d45a4b-kube-api-access-584nw\") pod \"cert-manager-cainjector-cf98fcc89-2f5gm\" (UID: \"c7557472-15a5-48a9-8a84-bd8478d45a4b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.002184 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f1c83115-1333-4064-8217-eb2edae57d74-kube-api-access-jfdzh\") pod \"cert-manager-858654f9db-rlb95\" (UID: \"f1c83115-1333-4064-8217-eb2edae57d74\") " pod="cert-manager/cert-manager-858654f9db-rlb95"
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.003689 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtp6t\" (UniqueName: \"kubernetes.io/projected/4a218ad6-abfb-49ac-9f07-a79d9f3bd07e-kube-api-access-qtp6t\") pod \"cert-manager-webhook-687f57d79b-r7gsp\" (UID: \"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.050866 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.072550 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rlb95"
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.096670 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.527044 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rlb95"]
Jan 30 10:22:41 crc kubenswrapper[4984]: W0130 10:22:41.534040 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c83115_1333_4064_8217_eb2edae57d74.slice/crio-25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6 WatchSource:0}: Error finding container 25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6: Status 404 returned error can't find the container with id 25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.536274 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.582335 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm"]
Jan 30 10:22:41 crc kubenswrapper[4984]: I0130 10:22:41.583127 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-r7gsp"]
Jan 30 10:22:41 crc kubenswrapper[4984]: W0130 10:22:41.591634 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a218ad6_abfb_49ac_9f07_a79d9f3bd07e.slice/crio-6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11 WatchSource:0}: Error finding container 6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11: Status 404 returned error can't find the container with id 6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11
Jan 30 10:22:42 crc kubenswrapper[4984]: I0130 10:22:42.000155 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" event={"ID":"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e","Type":"ContainerStarted","Data":"6602222407eb08104e75739e087c8508bb9261ab8d9366bec2402b50c0162d11"}
Jan 30 10:22:42 crc kubenswrapper[4984]: I0130 10:22:42.001335 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rlb95" event={"ID":"f1c83115-1333-4064-8217-eb2edae57d74","Type":"ContainerStarted","Data":"25ab55ca4d81fccd929a7228417d20939855c8567f379f6b9d4ebe70e1d86ee6"}
Jan 30 10:22:42 crc kubenswrapper[4984]: I0130 10:22:42.003371 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" event={"ID":"c7557472-15a5-48a9-8a84-bd8478d45a4b","Type":"ContainerStarted","Data":"970a974528026adafa77b3fb3fde690a63f0f21700fb55408af88b1d78d47763"}
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.024913 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" event={"ID":"4a218ad6-abfb-49ac-9f07-a79d9f3bd07e","Type":"ContainerStarted","Data":"fe5e441a51010fd90b77deec990bae6548eef663f2cac824fca41f892eed9f16"}
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.025527 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.028685 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rlb95" event={"ID":"f1c83115-1333-4064-8217-eb2edae57d74","Type":"ContainerStarted","Data":"66ede26cfd344150079cc17a5f99f7653c59b0a02a491548ee53c4c64335a390"}
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.030189 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" event={"ID":"c7557472-15a5-48a9-8a84-bd8478d45a4b","Type":"ContainerStarted","Data":"a6084e0d962926ecf26fa33dd874ef8e42195380093c74563e5f9b16b7d2c053"}
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.041378 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp" podStartSLOduration=2.716812713 podStartE2EDuration="6.041359704s" podCreationTimestamp="2026-01-30 10:22:40 +0000 UTC" firstStartedPulling="2026-01-30 10:22:41.59456543 +0000 UTC m=+666.160869264" lastFinishedPulling="2026-01-30 10:22:44.919112411 +0000 UTC m=+669.485416255" observedRunningTime="2026-01-30 10:22:46.040165113 +0000 UTC m=+670.606468947" watchObservedRunningTime="2026-01-30 10:22:46.041359704 +0000 UTC m=+670.607663538"
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.056562 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-rlb95" podStartSLOduration=2.692131299 podStartE2EDuration="6.056536317s" podCreationTimestamp="2026-01-30 10:22:40 +0000 UTC" firstStartedPulling="2026-01-30 10:22:41.536022138 +0000 UTC m=+666.102325952" lastFinishedPulling="2026-01-30 10:22:44.900427156 +0000 UTC m=+669.466730970" observedRunningTime="2026-01-30 10:22:46.053296271 +0000 UTC m=+670.619600095" watchObservedRunningTime="2026-01-30 10:22:46.056536317 +0000 UTC m=+670.622840151"
Jan 30 10:22:46 crc kubenswrapper[4984]: I0130 10:22:46.073589 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2f5gm" podStartSLOduration=2.744298642 podStartE2EDuration="6.073570938s" podCreationTimestamp="2026-01-30 10:22:40 +0000 UTC" firstStartedPulling="2026-01-30 10:22:41.586673081 +0000 UTC m=+666.152977035" lastFinishedPulling="2026-01-30 10:22:44.915945507 +0000 UTC m=+669.482249331" observedRunningTime="2026-01-30 10:22:46.070783114 +0000 UTC m=+670.637086978" watchObservedRunningTime="2026-01-30 10:22:46.073570938 +0000 UTC m=+670.639874782"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.229218 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrm2v"]
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.229999 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller" containerID="cri-o://04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230069 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb" containerID="cri-o://703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230143 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd" containerID="cri-o://6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230197 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230231 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb" containerID="cri-o://02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230283 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node" containerID="cri-o://92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.230309 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging" containerID="cri-o://84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.269482 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller" containerID="cri-o://ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" gracePeriod=30
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.530313 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.532781 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-acl-logging/0.log"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.533451 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-controller/0.log"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.534070 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604147 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x29cg"]
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604391 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604405 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604416 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604423 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604431 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604437 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604450 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604458 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604470 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604477 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604485 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604491 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604500 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604507 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604515 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604521 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604533 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kubecfg-setup"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604540 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kubecfg-setup"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604548 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604555 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604565 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604572 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604678 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604694 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-ovn-metrics"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604704 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="kube-rbac-proxy-node"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604712 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="sbdb"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604723 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="nbdb"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604733 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604744 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604753 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604760 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604768 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="northd"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604776 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovn-acl-logging"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604865 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604874 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: E0130 10:22:50.604884 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604890 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.604989 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" containerName="ovnkube-controller"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.606748 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.624753 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") "
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.624854 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.624947 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-kube-api-access-7m4h6\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625027 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-netd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625065 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-ovn\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625087 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-kubelet\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625101 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-config\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625115 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-netns\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625133 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-systemd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625152 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625167 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-etc-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625197 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-bin\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625213 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovn-node-metrics-cert\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625286 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-script-lib\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625310 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625327 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-slash\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625358 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-systemd-units\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-node-log\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625420 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-env-overrides\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625438 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-log-socket\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625455 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-var-lib-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.625497 4984 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.726964 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727034 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727105 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727132 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727162 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727185 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727207 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727227 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727274 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727306 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727334 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727361 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727382 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") pod 
\"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727409 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727432 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727456 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727478 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727499 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 
10:22:50.727518 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") pod \"000a8c9a-5211-4997-8b97-d37e227c899a\" (UID: \"000a8c9a-5211-4997-8b97-d37e227c899a\") " Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727622 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-kube-api-access-7m4h6\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727653 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-netd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727686 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-ovn\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727710 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-kubelet\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727730 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-config\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727757 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-netns\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727789 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-systemd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727821 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727853 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-etc-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727919 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-bin\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727945 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovn-node-metrics-cert\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727972 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.727993 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-script-lib\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728020 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728041 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-slash\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728070 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-systemd-units\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728090 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-node-log\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728114 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-env-overrides\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728139 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-log-socket\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728158 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-var-lib-openvswitch\") pod 
\"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728237 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-var-lib-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728309 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728377 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash" (OuterVolumeSpecName: "host-slash") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728405 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728432 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728457 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728483 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-netd\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728664 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-ovn\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728704 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-kubelet\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728620 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log" (OuterVolumeSpecName: "node-log") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728864 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket" (OuterVolumeSpecName: "log-socket") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728855 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-config\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729300 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729337 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-netns\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729367 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-systemd\") pod \"ovnkube-node-x29cg\" (UID: 
\"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729639 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovnkube-script-lib\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729665 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-slash\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729568 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-cni-bin\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729711 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-etc-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729712 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-run-openvswitch\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc 
kubenswrapper[4984]: I0130 10:22:50.729733 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-log-socket\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729753 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-node-log\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728957 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.730008 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-env-overrides\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-host-run-ovn-kubernetes\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729475 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729502 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729523 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729907 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.729971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.728935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-systemd-units\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.730399 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.735584 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb" (OuterVolumeSpecName: "kube-api-access-q7vnb") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "kube-api-access-q7vnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.735871 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.737817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-ovn-node-metrics-cert\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.751917 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "000a8c9a-5211-4997-8b97-d37e227c899a" (UID: "000a8c9a-5211-4997-8b97-d37e227c899a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.760005 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4h6\" (UniqueName: \"kubernetes.io/projected/358ad7a5-08e4-49b4-94c6-e2cdaa29d78b-kube-api-access-7m4h6\") pod \"ovnkube-node-x29cg\" (UID: \"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b\") " pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830369 4984 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-log-socket\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830432 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830452 4984 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830468 4984 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-node-log\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830485 4984 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-slash\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830501 4984 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830517 4984 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830539 4984 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830555 4984 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830571 4984 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830588 4984 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830604 4984 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830620 4984 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/000a8c9a-5211-4997-8b97-d37e227c899a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830636 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/000a8c9a-5211-4997-8b97-d37e227c899a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830653 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vnb\" (UniqueName: \"kubernetes.io/projected/000a8c9a-5211-4997-8b97-d37e227c899a-kube-api-access-q7vnb\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830669 4984 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830685 4984 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830702 4984 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.830721 4984 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/000a8c9a-5211-4997-8b97-d37e227c899a-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 30 10:22:50 crc kubenswrapper[4984]: I0130 10:22:50.932085 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:50 crc kubenswrapper[4984]: W0130 10:22:50.961381 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod358ad7a5_08e4_49b4_94c6_e2cdaa29d78b.slice/crio-959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038 WatchSource:0}: Error finding container 959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038: Status 404 returned error can't find the container with id 959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.063326 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/2.log"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064228 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/1.log"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064326 4984 generic.go:334] "Generic (PLEG): container finished" podID="0c5bace6-b520-4c9e-be10-a66fea4f9130" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac" exitCode=2
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064437 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerDied","Data":"8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.064498 4984 scope.go:117] "RemoveContainer" containerID="d1f0aa523bd92a390a62877c48fa44acc2f1288b219847ee9ae583f14cfe3de2"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.067350 4984 scope.go:117] "RemoveContainer" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.069481 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bnkpj_openshift-multus(0c5bace6-b520-4c9e-be10-a66fea4f9130)\"" pod="openshift-multus/multus-bnkpj" podUID="0c5bace6-b520-4c9e-be10-a66fea4f9130"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.070886 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovnkube-controller/3.log"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.074734 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-acl-logging/0.log"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.075470 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xrm2v_000a8c9a-5211-4997-8b97-d37e227c899a/ovn-controller/0.log"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076389 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" exitCode=0
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076432 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" exitCode=0
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076453 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" exitCode=0
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076473 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" exitCode=0
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076496 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" exitCode=0
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076515 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" exitCode=0
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076532 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" exitCode=143
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076550 4984 generic.go:334] "Generic (PLEG): container finished" podID="000a8c9a-5211-4997-8b97-d37e227c899a" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" exitCode=143
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076694 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076725 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076779 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076804 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076831 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076854 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076870 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076887 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076903 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076917 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076932 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076945 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076959 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076972 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.076993 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077016 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077032 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077046 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077061 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077075 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077088 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077103 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077117 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077130 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077145 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077165 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077188 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077205 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077222 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077236 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077290 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077309 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077323 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077338 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077353 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077369 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077390 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v" event={"ID":"000a8c9a-5211-4997-8b97-d37e227c899a","Type":"ContainerDied","Data":"9a5c5f0c87eb230fd06c2a946e269e2d2a3860384327e26e9cd419f72e754050"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077414 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077431 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077447 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077462 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077476 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077491 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077505 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077518 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077531 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077544 4984 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.077796 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xrm2v"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.080193 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"959617f1e4cb010c00ff6bdef7ff995307e67d60eb1d6a5d66dccb76c37a0038"}
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.103488 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-r7gsp"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.116222 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.158752 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.174924 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrm2v"]
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.181148 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xrm2v"]
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.220643 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.240061 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.258467 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.271190 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.289301 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.303587 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.319586 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.337594 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.374923 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.375441 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375469 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375489 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.375893 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375919 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.375941 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.376281 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376312 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376326 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.376615 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376634 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.376649 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.377027 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377081 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377114 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.377536 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377567 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.377585 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"
Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.378083 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"
Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378119 4984 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378140 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.378517 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378550 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.378571 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.378967 4984 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379010 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379038 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: E0130 10:22:51.379389 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379414 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container 
\"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379436 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379853 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.379874 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380193 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380218 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380553 4984 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380572 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380846 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.380883 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382021 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 
6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382063 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382430 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382455 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382781 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.382801 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383090 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383107 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383531 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383550 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383839 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not 
exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.383868 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.384817 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.384839 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385118 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385137 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385428 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status 
\"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385443 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385693 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385709 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.385998 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386013 4984 scope.go:117] "RemoveContainer" 
containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386295 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386309 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386559 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386588 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386882 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could 
not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.386898 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387208 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387221 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387569 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.387590 4984 scope.go:117] "RemoveContainer" containerID="ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 
10:22:51.387971 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96"} err="failed to get container status \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": rpc error: code = NotFound desc = could not find container \"ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96\": container with ID starting with ceaeffd9205c02161930c43a7104125fa47902269b4d1ee8010bd47ea600de96 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388012 4984 scope.go:117] "RemoveContainer" containerID="309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388386 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6"} err="failed to get container status \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": rpc error: code = NotFound desc = could not find container \"309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6\": container with ID starting with 309f4bcc61f1fb092ba8e5cfd13a3150cde741378640da696f9dc566d55e0ac6 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388403 4984 scope.go:117] "RemoveContainer" containerID="02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388693 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab"} err="failed to get container status \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": rpc error: code = NotFound desc = could not find container \"02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab\": container with ID starting with 
02af1acec40f0a17ca06c7cb1fb84a428df82add07bf70a898d019e45c8b3cab not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.388713 4984 scope.go:117] "RemoveContainer" containerID="703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389329 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce"} err="failed to get container status \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": rpc error: code = NotFound desc = could not find container \"703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce\": container with ID starting with 703d6461fd8ba1cfe0a58b569495f5e6ac02882dcfe79336e95cc6cba243f6ce not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389487 4984 scope.go:117] "RemoveContainer" containerID="6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389853 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc"} err="failed to get container status \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": rpc error: code = NotFound desc = could not find container \"6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc\": container with ID starting with 6477a9af417e873e5a1379482f29cab1d934e56ada6bebc17bc5a9fb0495a0dc not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.389871 4984 scope.go:117] "RemoveContainer" containerID="4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390172 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5"} err="failed to get container status \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": rpc error: code = NotFound desc = could not find container \"4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5\": container with ID starting with 4ac5ad3abe85c60cc68a993d3e9ce4a6f6a874b8b9acb9ddc0a801c2bdc9cff5 not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390189 4984 scope.go:117] "RemoveContainer" containerID="92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390502 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d"} err="failed to get container status \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": rpc error: code = NotFound desc = could not find container \"92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d\": container with ID starting with 92e09eb6772c210b9076fbd49176a78aee894078d8c2a4f255be133ebfdfaa6d not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390527 4984 scope.go:117] "RemoveContainer" containerID="84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390823 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c"} err="failed to get container status \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": rpc error: code = NotFound desc = could not find container \"84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c\": container with ID starting with 84d4f71051ac3ea9cbbe7a71f185e3ddb26d71e16c29fbb6f52bf55d1f9e543c not found: ID does not 
exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.390839 4984 scope.go:117] "RemoveContainer" containerID="04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.391162 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf"} err="failed to get container status \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": rpc error: code = NotFound desc = could not find container \"04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf\": container with ID starting with 04fcd50acc348750a6b79fe6bad61bb888dbcb78e211efe3bdb4d691b77de6bf not found: ID does not exist" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.391199 4984 scope.go:117] "RemoveContainer" containerID="452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71" Jan 30 10:22:51 crc kubenswrapper[4984]: I0130 10:22:51.391645 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71"} err="failed to get container status \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": rpc error: code = NotFound desc = could not find container \"452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71\": container with ID starting with 452682d07c56edf6ab8484e3fda0d7a30aebdc5cef8e6908f7aab70305cc7b71 not found: ID does not exist" Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.090210 4984 generic.go:334] "Generic (PLEG): container finished" podID="358ad7a5-08e4-49b4-94c6-e2cdaa29d78b" containerID="1c4864b5740296b99fe6fc3d714405a14c1db55be936eaf266be69544a651ab8" exitCode=0 Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.093204 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/2.log"
Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.100171 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000a8c9a-5211-4997-8b97-d37e227c899a" path="/var/lib/kubelet/pods/000a8c9a-5211-4997-8b97-d37e227c899a/volumes"
Jan 30 10:22:52 crc kubenswrapper[4984]: I0130 10:22:52.103170 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerDied","Data":"1c4864b5740296b99fe6fc3d714405a14c1db55be936eaf266be69544a651ab8"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103413 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"54234ab55a96b7dfeee1dd713fedd3fc5afd2729b3deb0c7362ca6e1cc006ab0"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103673 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"96f74a19cc189667951bdf060cacadfd6379a5b239ca05987aa7c327d1efb258"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103685 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"4d32fcfe0176e4207fcccc05d2ccf8d2003ea50159f349835b538052ce16abf5"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103694 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"b3bd7ca4c574acd0c0d43f4c52c18304d702f9ddb2eec342f31a30464d04adc6"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103701 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"eb676a53e964cf459a78e7ab084c5af3ccc07714c10fb98c969c9a0a17325c1c"}
Jan 30 10:22:53 crc kubenswrapper[4984]: I0130 10:22:53.103708 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"311129d4c9e851a5450202ed349d55de22f01190a81e0f40738ea385012b365d"}
Jan 30 10:22:56 crc kubenswrapper[4984]: I0130 10:22:56.124554 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"0b6d4174a61d6b37eaeea67058dc60d45d9eb0d68a721c4c3b49d231bc7a8ddf"}
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143137 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" event={"ID":"358ad7a5-08e4-49b4-94c6-e2cdaa29d78b","Type":"ContainerStarted","Data":"8335331d4a0e32c46c674b569171a026f63af36bd96854156ed59179619b0361"}
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143940 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143959 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.143973 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.178519 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg" podStartSLOduration=8.17850013 podStartE2EDuration="8.17850013s" podCreationTimestamp="2026-01-30 10:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:22:58.175335686 +0000 UTC m=+682.741639530" watchObservedRunningTime="2026-01-30 10:22:58.17850013 +0000 UTC m=+682.744803944"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.185430 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:22:58 crc kubenswrapper[4984]: I0130 10:22:58.189494 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:23:02 crc kubenswrapper[4984]: I0130 10:23:02.090428 4984 scope.go:117] "RemoveContainer" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"
Jan 30 10:23:02 crc kubenswrapper[4984]: E0130 10:23:02.091391 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bnkpj_openshift-multus(0c5bace6-b520-4c9e-be10-a66fea4f9130)\"" pod="openshift-multus/multus-bnkpj" podUID="0c5bace6-b520-4c9e-be10-a66fea4f9130"
Jan 30 10:23:14 crc kubenswrapper[4984]: I0130 10:23:14.090675 4984 scope.go:117] "RemoveContainer" containerID="8be930e4cf669583e0900e6287175bb306016d86bae832b1da4c9dc6b3c4baac"
Jan 30 10:23:15 crc kubenswrapper[4984]: I0130 10:23:15.258883 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bnkpj_0c5bace6-b520-4c9e-be10-a66fea4f9130/kube-multus/2.log"
Jan 30 10:23:15 crc kubenswrapper[4984]: I0130 10:23:15.259625 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bnkpj" event={"ID":"0c5bace6-b520-4c9e-be10-a66fea4f9130","Type":"ContainerStarted","Data":"cd92d66ad8d62c2e690c12a016dd84062559fd8d20c072207b7036f21cc178f8"}
Jan 30 10:23:20 crc kubenswrapper[4984]: I0130 10:23:20.970287 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x29cg"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.801444 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"]
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.803096 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.805721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.815389 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"]
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.985204 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.985365 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:30 crc kubenswrapper[4984]: I0130 10:23:30.985527 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.086904 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087030 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087567 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.087826 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.128847 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.424803 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:31 crc kubenswrapper[4984]: I0130 10:23:31.681519 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"]
Jan 30 10:23:32 crc kubenswrapper[4984]: I0130 10:23:32.525018 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerStarted","Data":"7eabc67f69dff0b134b4f09197340826973435e1a29511d08e6f438043f9e537"}
Jan 30 10:23:32 crc kubenswrapper[4984]: I0130 10:23:32.525430 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerStarted","Data":"31ee50686050c73a5686453e58e6b4254dcb0b1869f215eec4bd80b928af14cd"}
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.000916 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.001052 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.531653 4984 generic.go:334] "Generic (PLEG): container finished" podID="790867b3-e261-4564-a2d4-ffc041c3a090" containerID="7eabc67f69dff0b134b4f09197340826973435e1a29511d08e6f438043f9e537" exitCode=0
Jan 30 10:23:33 crc kubenswrapper[4984]: I0130 10:23:33.531751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"7eabc67f69dff0b134b4f09197340826973435e1a29511d08e6f438043f9e537"}
Jan 30 10:23:35 crc kubenswrapper[4984]: I0130 10:23:35.546994 4984 generic.go:334] "Generic (PLEG): container finished" podID="790867b3-e261-4564-a2d4-ffc041c3a090" containerID="8c00d1318781d55105db6d5905b0f11ee20e9abf1219646311720e3f885082b9" exitCode=0
Jan 30 10:23:35 crc kubenswrapper[4984]: I0130 10:23:35.547345 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"8c00d1318781d55105db6d5905b0f11ee20e9abf1219646311720e3f885082b9"}
Jan 30 10:23:36 crc kubenswrapper[4984]: I0130 10:23:36.557036 4984 generic.go:334] "Generic (PLEG): container finished" podID="790867b3-e261-4564-a2d4-ffc041c3a090" containerID="5c0faac60da3ad3a6e14e54f13902a360eb743a81ead8696ea5dd06806a7932a" exitCode=0
Jan 30 10:23:36 crc kubenswrapper[4984]: I0130 10:23:36.557127 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"5c0faac60da3ad3a6e14e54f13902a360eb743a81ead8696ea5dd06806a7932a"}
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.882929 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.981212 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") pod \"790867b3-e261-4564-a2d4-ffc041c3a090\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") "
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.981390 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") pod \"790867b3-e261-4564-a2d4-ffc041c3a090\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") "
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.981424 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") pod \"790867b3-e261-4564-a2d4-ffc041c3a090\" (UID: \"790867b3-e261-4564-a2d4-ffc041c3a090\") "
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.982526 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle" (OuterVolumeSpecName: "bundle") pod "790867b3-e261-4564-a2d4-ffc041c3a090" (UID: "790867b3-e261-4564-a2d4-ffc041c3a090"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:23:37 crc kubenswrapper[4984]: I0130 10:23:37.987317 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k" (OuterVolumeSpecName: "kube-api-access-sh57k") pod "790867b3-e261-4564-a2d4-ffc041c3a090" (UID: "790867b3-e261-4564-a2d4-ffc041c3a090"). InnerVolumeSpecName "kube-api-access-sh57k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.003425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util" (OuterVolumeSpecName: "util") pod "790867b3-e261-4564-a2d4-ffc041c3a090" (UID: "790867b3-e261-4564-a2d4-ffc041c3a090"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.082931 4984 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-util\") on node \"crc\" DevicePath \"\""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.082997 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh57k\" (UniqueName: \"kubernetes.io/projected/790867b3-e261-4564-a2d4-ffc041c3a090-kube-api-access-sh57k\") on node \"crc\" DevicePath \"\""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.083025 4984 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/790867b3-e261-4564-a2d4-ffc041c3a090-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.573497 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4" event={"ID":"790867b3-e261-4564-a2d4-ffc041c3a090","Type":"ContainerDied","Data":"31ee50686050c73a5686453e58e6b4254dcb0b1869f215eec4bd80b928af14cd"}
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.573557 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ee50686050c73a5686453e58e6b4254dcb0b1869f215eec4bd80b928af14cd"
Jan 30 10:23:38 crc kubenswrapper[4984]: I0130 10:23:38.573593 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.371475 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tl42h"]
Jan 30 10:23:42 crc kubenswrapper[4984]: E0130 10:23:42.372273 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="extract"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372287 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="extract"
Jan 30 10:23:42 crc kubenswrapper[4984]: E0130 10:23:42.372302 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="util"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372312 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="util"
Jan 30 10:23:42 crc kubenswrapper[4984]: E0130 10:23:42.372334 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="pull"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372341 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="pull"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372453 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="790867b3-e261-4564-a2d4-ffc041c3a090" containerName="extract"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.372875 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.375032 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7bhdk"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.376080 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.376241 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.414019 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tl42h"]
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.540790 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gtr\" (UniqueName: \"kubernetes.io/projected/ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b-kube-api-access-p2gtr\") pod \"nmstate-operator-646758c888-tl42h\" (UID: \"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b\") " pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.642339 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2gtr\" (UniqueName: \"kubernetes.io/projected/ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b-kube-api-access-p2gtr\") pod \"nmstate-operator-646758c888-tl42h\" (UID: \"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b\") " pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.675394 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2gtr\" (UniqueName: \"kubernetes.io/projected/ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b-kube-api-access-p2gtr\") pod \"nmstate-operator-646758c888-tl42h\" (UID: \"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b\") " pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:42 crc kubenswrapper[4984]: I0130 10:23:42.697423 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h"
Jan 30 10:23:43 crc kubenswrapper[4984]: I0130 10:23:43.010336 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tl42h"]
Jan 30 10:23:43 crc kubenswrapper[4984]: W0130 10:23:43.034957 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce2396e1_20f3_4b5a_b3ab_4e8496d6c58b.slice/crio-8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505 WatchSource:0}: Error finding container 8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505: Status 404 returned error can't find the container with id 8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505
Jan 30 10:23:43 crc kubenswrapper[4984]: I0130 10:23:43.604592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h" event={"ID":"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b","Type":"ContainerStarted","Data":"8b0107022c440dd7cc72bd75ce1483d76b0674da7e6f514042587c2117bb4505"}
Jan 30 10:23:46 crc kubenswrapper[4984]: I0130 10:23:46.625483 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h" event={"ID":"ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b","Type":"ContainerStarted","Data":"55d130a62cc92965f51d590619badaa1790cef40a4249b321a7a8b897b8de3b7"}
Jan 30 10:23:46 crc kubenswrapper[4984]: I0130 10:23:46.647163 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tl42h" podStartSLOduration=2.250248619 podStartE2EDuration="4.647139515s" podCreationTimestamp="2026-01-30 10:23:42 +0000 UTC" firstStartedPulling="2026-01-30 10:23:43.040850244 +0000 UTC m=+727.607154098" lastFinishedPulling="2026-01-30 10:23:45.43774117 +0000 UTC m=+730.004044994" observedRunningTime="2026-01-30 10:23:46.642856895 +0000 UTC m=+731.209160739" watchObservedRunningTime="2026-01-30 10:23:46.647139515 +0000 UTC m=+731.213443349"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.301988 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.304379 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.308771 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-58wdc"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.330200 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.331795 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.338918 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.354632 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vh6vz"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.358461 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.389314 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.393846 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.467983 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.468948 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.476600 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.476941 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vgmj8"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.477003 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487340 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-nmstate-lock\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487405 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f85w4\" (UniqueName: \"kubernetes.io/projected/88dac402-7307-465d-b5a0-61762ee570c6-kube-api-access-f85w4\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487443 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487482 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gl5x\" (UniqueName: \"kubernetes.io/projected/f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf-kube-api-access-8gl5x\") pod \"nmstate-metrics-54757c584b-7x2rq\" (UID: \"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487512 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-dbus-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487539 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpnhj\" (UniqueName: \"kubernetes.io/projected/739c7b03-ba6e-48de-a07b-6bd4206c206f-kube-api-access-wpnhj\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.487593 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-ovs-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.492575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"]
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588498 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m8tk\" (UniqueName: \"kubernetes.io/projected/471cb540-b50e-4adb-8984-65c46a7f9714-kube-api-access-8m8tk\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588597 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-ovs-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588633 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-nmstate-lock\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588681 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/471cb540-b50e-4adb-8984-65c46a7f9714-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588708 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f85w4\" (UniqueName: \"kubernetes.io/projected/88dac402-7307-465d-b5a0-61762ee570c6-kube-api-access-f85w4\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588738 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588781 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gl5x\" (UniqueName: \"kubernetes.io/projected/f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf-kube-api-access-8gl5x\") pod \"nmstate-metrics-54757c584b-7x2rq\" (UID: \"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588811 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-dbus-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588835 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/471cb540-b50e-4adb-8984-65c46a7f9714-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.588872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpnhj\" (UniqueName: \"kubernetes.io/projected/739c7b03-ba6e-48de-a07b-6bd4206c206f-kube-api-access-wpnhj\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.589295 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-ovs-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.589334 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-nmstate-lock\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: E0130 10:23:51.589681 4984 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 30 10:23:51 crc kubenswrapper[4984]: E0130 10:23:51.589770 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair podName:739c7b03-ba6e-48de-a07b-6bd4206c206f nodeName:}" failed. No retries permitted until 2026-01-30 10:23:52.08974378 +0000 UTC m=+736.656047594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-gnkrh" (UID: "739c7b03-ba6e-48de-a07b-6bd4206c206f") : secret "openshift-nmstate-webhook" not found
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.590142 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/88dac402-7307-465d-b5a0-61762ee570c6-dbus-socket\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.616126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpnhj\" (UniqueName: \"kubernetes.io/projected/739c7b03-ba6e-48de-a07b-6bd4206c206f-kube-api-access-wpnhj\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.620489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gl5x\" (UniqueName: \"kubernetes.io/projected/f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf-kube-api-access-8gl5x\") pod \"nmstate-metrics-54757c584b-7x2rq\" (UID: \"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.624346 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f85w4\" (UniqueName: \"kubernetes.io/projected/88dac402-7307-465d-b5a0-61762ee570c6-kube-api-access-f85w4\") pod \"nmstate-handler-vh6vz\" (UID: \"88dac402-7307-465d-b5a0-61762ee570c6\") " pod="openshift-nmstate/nmstate-handler-vh6vz"
Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.677785 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.690754 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/471cb540-b50e-4adb-8984-65c46a7f9714-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.691119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/471cb540-b50e-4adb-8984-65c46a7f9714-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.691293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m8tk\" (UniqueName: \"kubernetes.io/projected/471cb540-b50e-4adb-8984-65c46a7f9714-kube-api-access-8m8tk\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.692551 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/471cb540-b50e-4adb-8984-65c46a7f9714-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.701993 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/471cb540-b50e-4adb-8984-65c46a7f9714-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.710553 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vh6vz" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.711111 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66c7556f7f-xktgt"] Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.711876 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.733059 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m8tk\" (UniqueName: \"kubernetes.io/projected/471cb540-b50e-4adb-8984-65c46a7f9714-kube-api-access-8m8tk\") pod \"nmstate-console-plugin-7754f76f8b-mpwzb\" (UID: \"471cb540-b50e-4adb-8984-65c46a7f9714\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.738242 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c7556f7f-xktgt"] Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-oauth-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792801 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-service-ca\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792824 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxktf\" (UniqueName: \"kubernetes.io/projected/6c16c4ad-ebc2-421d-8f7c-75beca032e68-kube-api-access-xxktf\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792844 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792886 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792916 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-oauth-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.792933 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-trusted-ca-bundle\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.798444 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.893944 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-oauth-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.894377 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-service-ca\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.894398 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxktf\" (UniqueName: \"kubernetes.io/projected/6c16c4ad-ebc2-421d-8f7c-75beca032e68-kube-api-access-xxktf\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895458 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-service-ca\") pod \"console-66c7556f7f-xktgt\" (UID: 
\"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895533 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895542 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-oauth-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895573 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-oauth-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.895697 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-trusted-ca-bundle\") pod \"console-66c7556f7f-xktgt\" (UID: 
\"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.896146 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.897162 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c16c4ad-ebc2-421d-8f7c-75beca032e68-trusted-ca-bundle\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.901455 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-serving-cert\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.903324 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c16c4ad-ebc2-421d-8f7c-75beca032e68-console-oauth-config\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.912168 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxktf\" (UniqueName: \"kubernetes.io/projected/6c16c4ad-ebc2-421d-8f7c-75beca032e68-kube-api-access-xxktf\") pod \"console-66c7556f7f-xktgt\" (UID: \"6c16c4ad-ebc2-421d-8f7c-75beca032e68\") " 
pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:51 crc kubenswrapper[4984]: I0130 10:23:51.925515 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7x2rq"] Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.019893 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb"] Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.087221 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.099097 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.101865 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/739c7b03-ba6e-48de-a07b-6bd4206c206f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-gnkrh\" (UID: \"739c7b03-ba6e-48de-a07b-6bd4206c206f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.302611 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.487047 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c7556f7f-xktgt"] Jan 30 10:23:52 crc kubenswrapper[4984]: W0130 10:23:52.495421 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c16c4ad_ebc2_421d_8f7c_75beca032e68.slice/crio-6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12 WatchSource:0}: Error finding container 6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12: Status 404 returned error can't find the container with id 6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12 Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.500652 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh"] Jan 30 10:23:52 crc kubenswrapper[4984]: W0130 10:23:52.512422 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739c7b03_ba6e_48de_a07b_6bd4206c206f.slice/crio-b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826 WatchSource:0}: Error finding container b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826: Status 404 returned error can't find the container with id b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826 Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.663125 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" event={"ID":"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf","Type":"ContainerStarted","Data":"fbd87b4de766536ef0d7fdd0bbe39c8bc46dd273e2a7d40dbf78dd2152e9e965"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.664151 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" event={"ID":"739c7b03-ba6e-48de-a07b-6bd4206c206f","Type":"ContainerStarted","Data":"b5353c7196e86c069f34c32b0427b1f2a035123dd4a819dec2d19f3f17cd6826"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.665216 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" event={"ID":"471cb540-b50e-4adb-8984-65c46a7f9714","Type":"ContainerStarted","Data":"2a5afaa45e1f227dd21154bd2a58becdb31a3db7f13ee924c15ad570f252307c"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.666534 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c7556f7f-xktgt" event={"ID":"6c16c4ad-ebc2-421d-8f7c-75beca032e68","Type":"ContainerStarted","Data":"323c227e1096b3aad356fb2f689f550794bba69c2c568323d06590d8c77e4732"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.666559 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c7556f7f-xktgt" event={"ID":"6c16c4ad-ebc2-421d-8f7c-75beca032e68","Type":"ContainerStarted","Data":"6e0cf92c1bd899d46428277a6e9e8194b39ccf61db4708e8bedfe4ba2f6ddd12"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.667791 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vh6vz" event={"ID":"88dac402-7307-465d-b5a0-61762ee570c6","Type":"ContainerStarted","Data":"76d5fecaf3d830e7dede162ae7b0528c97fa5f1f1f84d78ac7a81d396411fe8c"} Jan 30 10:23:52 crc kubenswrapper[4984]: I0130 10:23:52.689223 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66c7556f7f-xktgt" podStartSLOduration=1.6892003249999998 podStartE2EDuration="1.689200325s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:23:52.685322764 +0000 UTC m=+737.251626588" 
watchObservedRunningTime="2026-01-30 10:23:52.689200325 +0000 UTC m=+737.255504149" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.687008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" event={"ID":"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf","Type":"ContainerStarted","Data":"962aac82f3db923fefbe8f26abd28120cf7e49920eefcb668a7206070ae43125"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.690406 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" event={"ID":"739c7b03-ba6e-48de-a07b-6bd4206c206f","Type":"ContainerStarted","Data":"4fa9303285a3500acaf91a90eb757f24ec8c6ac837c63936ff10fb26403280c6"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.692642 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" event={"ID":"471cb540-b50e-4adb-8984-65c46a7f9714","Type":"ContainerStarted","Data":"45e29847d7467dbd735acaa0da348c30272e2c4c759469ec449a19369040b4fb"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.694388 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vh6vz" event={"ID":"88dac402-7307-465d-b5a0-61762ee570c6","Type":"ContainerStarted","Data":"39bf8d3e9c1a44a820271eb7b9e45df0b18994360c5f46ff62a7afc26df2954b"} Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.694491 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vh6vz" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.735498 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" podStartSLOduration=2.496658696 podStartE2EDuration="4.735483362s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:52.514783151 +0000 UTC m=+737.081086975" lastFinishedPulling="2026-01-30 
10:23:54.753607817 +0000 UTC m=+739.319911641" observedRunningTime="2026-01-30 10:23:55.707595357 +0000 UTC m=+740.273899261" watchObservedRunningTime="2026-01-30 10:23:55.735483362 +0000 UTC m=+740.301787176" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.736533 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mpwzb" podStartSLOduration=2.032040191 podStartE2EDuration="4.736526336s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:52.025587929 +0000 UTC m=+736.591891773" lastFinishedPulling="2026-01-30 10:23:54.730074064 +0000 UTC m=+739.296377918" observedRunningTime="2026-01-30 10:23:55.732358609 +0000 UTC m=+740.298662433" watchObservedRunningTime="2026-01-30 10:23:55.736526336 +0000 UTC m=+740.302830160" Jan 30 10:23:55 crc kubenswrapper[4984]: I0130 10:23:55.749820 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vh6vz" podStartSLOduration=1.796163635 podStartE2EDuration="4.749803808s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:51.776223586 +0000 UTC m=+736.342527410" lastFinishedPulling="2026-01-30 10:23:54.729863729 +0000 UTC m=+739.296167583" observedRunningTime="2026-01-30 10:23:55.746015169 +0000 UTC m=+740.312318993" watchObservedRunningTime="2026-01-30 10:23:55.749803808 +0000 UTC m=+740.316107632" Jan 30 10:23:56 crc kubenswrapper[4984]: I0130 10:23:56.717647 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:23:57 crc kubenswrapper[4984]: I0130 10:23:57.724033 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" event={"ID":"f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf","Type":"ContainerStarted","Data":"ba907763d3b7a9ef86acddd82a6c1fb5088f5446dab84517ab6d6acc7a20c28e"} Jan 30 
10:23:57 crc kubenswrapper[4984]: I0130 10:23:57.746305 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-7x2rq" podStartSLOduration=1.717429617 podStartE2EDuration="6.746285677s" podCreationTimestamp="2026-01-30 10:23:51 +0000 UTC" firstStartedPulling="2026-01-30 10:23:51.93699725 +0000 UTC m=+736.503301074" lastFinishedPulling="2026-01-30 10:23:56.96585331 +0000 UTC m=+741.532157134" observedRunningTime="2026-01-30 10:23:57.740802538 +0000 UTC m=+742.307106392" watchObservedRunningTime="2026-01-30 10:23:57.746285677 +0000 UTC m=+742.312589501" Jan 30 10:24:01 crc kubenswrapper[4984]: I0130 10:24:01.755637 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vh6vz" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.088360 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.088855 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.098092 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.779146 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66c7556f7f-xktgt" Jan 30 10:24:02 crc kubenswrapper[4984]: I0130 10:24:02.842809 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:24:03 crc kubenswrapper[4984]: I0130 10:24:03.001218 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:24:03 crc kubenswrapper[4984]: I0130 10:24:03.001301 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:24:11 crc kubenswrapper[4984]: I0130 10:24:11.944413 4984 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 10:24:12 crc kubenswrapper[4984]: I0130 10:24:12.312135 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-gnkrh" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.804539 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg"] Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.806625 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.809128 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.817407 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg"] Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.902189 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-v2prt" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" containerID="cri-o://88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" gracePeriod=15 Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.909096 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.909640 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:27 crc kubenswrapper[4984]: I0130 10:24:27.909677 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn94j\" 
(UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.011160 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.011281 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.011342 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.012142 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.012620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.040553 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.161311 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.275894 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v2prt_6ca41dbd-8af6-43ac-af3d-b0cc6222264b/console/0.log" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.276378 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416547 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416646 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416749 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416779 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.416892 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") pod \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\" (UID: \"6ca41dbd-8af6-43ac-af3d-b0cc6222264b\") " Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.417581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.418114 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config" (OuterVolumeSpecName: "console-config") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.418131 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca" (OuterVolumeSpecName: "service-ca") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.418369 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.422909 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.423085 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl" (OuterVolumeSpecName: "kube-api-access-224pl") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "kube-api-access-224pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.423113 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6ca41dbd-8af6-43ac-af3d-b0cc6222264b" (UID: "6ca41dbd-8af6-43ac-af3d-b0cc6222264b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518196 4984 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518271 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-224pl\" (UniqueName: \"kubernetes.io/projected/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-kube-api-access-224pl\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518286 4984 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518301 4984 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518312 4984 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518323 4984 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.518334 4984 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6ca41dbd-8af6-43ac-af3d-b0cc6222264b-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:28 crc 
kubenswrapper[4984]: I0130 10:24:28.563817 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg"] Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.970558 4984 generic.go:334] "Generic (PLEG): container finished" podID="d796f450-1311-422f-9f63-324d0a624f15" containerID="318bec374a38e85e9c4f0c68d49ddc8cfb736bc2bb16687bf04f555f23d8ff40" exitCode=0 Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.970623 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"318bec374a38e85e9c4f0c68d49ddc8cfb736bc2bb16687bf04f555f23d8ff40"} Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.970646 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerStarted","Data":"5c1b15a6a18f98241e780ba14317fddd47db3e30c840e0aaf9dd2fb132e46c50"} Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974233 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v2prt_6ca41dbd-8af6-43ac-af3d-b0cc6222264b/console/0.log" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974294 4984 generic.go:334] "Generic (PLEG): container finished" podID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" exitCode=2 Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974325 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerDied","Data":"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd"} Jan 30 10:24:28 crc 
kubenswrapper[4984]: I0130 10:24:28.974351 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v2prt" event={"ID":"6ca41dbd-8af6-43ac-af3d-b0cc6222264b","Type":"ContainerDied","Data":"1d107edce64a981b016ac18f64e3952e99a1d1ef26bb18f85c1948ec49ead73c"} Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974368 4984 scope.go:117] "RemoveContainer" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" Jan 30 10:24:28 crc kubenswrapper[4984]: I0130 10:24:28.974963 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v2prt" Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.002323 4984 scope.go:117] "RemoveContainer" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" Jan 30 10:24:29 crc kubenswrapper[4984]: E0130 10:24:29.003096 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd\": container with ID starting with 88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd not found: ID does not exist" containerID="88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd" Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.003147 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd"} err="failed to get container status \"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd\": rpc error: code = NotFound desc = could not find container \"88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd\": container with ID starting with 88063a8ff2b7b4243c2fdca4322ee1f331e252dd7a707213dff68ad1b621f1cd not found: ID does not exist" Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.017231 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:24:29 crc kubenswrapper[4984]: I0130 10:24:29.021159 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-v2prt"] Jan 30 10:24:30 crc kubenswrapper[4984]: I0130 10:24:30.102451 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" path="/var/lib/kubelet/pods/6ca41dbd-8af6-43ac-af3d-b0cc6222264b/volumes" Jan 30 10:24:30 crc kubenswrapper[4984]: I0130 10:24:30.996812 4984 generic.go:334] "Generic (PLEG): container finished" podID="d796f450-1311-422f-9f63-324d0a624f15" containerID="632fe958e127244654dbcbdedf5df666fd52a8a3465d217c273e76221b35a0a3" exitCode=0 Jan 30 10:24:30 crc kubenswrapper[4984]: I0130 10:24:30.996889 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"632fe958e127244654dbcbdedf5df666fd52a8a3465d217c273e76221b35a0a3"} Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.149239 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:31 crc kubenswrapper[4984]: E0130 10:24:31.149585 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.149600 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.149747 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca41dbd-8af6-43ac-af3d-b0cc6222264b" containerName="console" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.150971 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.155982 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.257353 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.257747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.257855 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359061 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359120 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359603 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.359757 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.379450 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"redhat-operators-pkwf8\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.476393 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:31 crc kubenswrapper[4984]: I0130 10:24:31.707121 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.003600 4984 generic.go:334] "Generic (PLEG): container finished" podID="d796f450-1311-422f-9f63-324d0a624f15" containerID="a63b21f2008c73a509a1ca055dd42b3084bd6370b089ea5cdc11723406872d0a" exitCode=0 Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.003655 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"a63b21f2008c73a509a1ca055dd42b3084bd6370b089ea5cdc11723406872d0a"} Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.005122 4984 generic.go:334] "Generic (PLEG): container finished" podID="a0edede8-30cb-4add-9a06-830084c7c57b" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" exitCode=0 Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.005148 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef"} Jan 30 10:24:32 crc kubenswrapper[4984]: I0130 10:24:32.005161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerStarted","Data":"a552366bd60a088f98781cc480bddc1054116923b4637e5e3978f79539b7bf40"} Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.001091 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.001549 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.001639 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.002400 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.002497 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71" gracePeriod=600 Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.016355 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerStarted","Data":"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1"} Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.455524 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.590608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") pod \"d796f450-1311-422f-9f63-324d0a624f15\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.590716 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") pod \"d796f450-1311-422f-9f63-324d0a624f15\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.590757 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") pod \"d796f450-1311-422f-9f63-324d0a624f15\" (UID: \"d796f450-1311-422f-9f63-324d0a624f15\") " Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.592298 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle" (OuterVolumeSpecName: "bundle") pod "d796f450-1311-422f-9f63-324d0a624f15" (UID: "d796f450-1311-422f-9f63-324d0a624f15"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.599477 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j" (OuterVolumeSpecName: "kube-api-access-vn94j") pod "d796f450-1311-422f-9f63-324d0a624f15" (UID: "d796f450-1311-422f-9f63-324d0a624f15"). InnerVolumeSpecName "kube-api-access-vn94j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.604940 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util" (OuterVolumeSpecName: "util") pod "d796f450-1311-422f-9f63-324d0a624f15" (UID: "d796f450-1311-422f-9f63-324d0a624f15"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.692664 4984 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.692705 4984 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d796f450-1311-422f-9f63-324d0a624f15-util\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:33 crc kubenswrapper[4984]: I0130 10:24:33.692722 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn94j\" (UniqueName: \"kubernetes.io/projected/d796f450-1311-422f-9f63-324d0a624f15-kube-api-access-vn94j\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.027002 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" event={"ID":"d796f450-1311-422f-9f63-324d0a624f15","Type":"ContainerDied","Data":"5c1b15a6a18f98241e780ba14317fddd47db3e30c840e0aaf9dd2fb132e46c50"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.027521 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1b15a6a18f98241e780ba14317fddd47db3e30c840e0aaf9dd2fb132e46c50" Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.027037 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg" Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.029901 4984 generic.go:334] "Generic (PLEG): container finished" podID="a0edede8-30cb-4add-9a06-830084c7c57b" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" exitCode=0 Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.029979 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034293 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71" exitCode=0 Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034332 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034362 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e"} Jan 30 10:24:34 crc kubenswrapper[4984]: I0130 10:24:34.034381 4984 scope.go:117] "RemoveContainer" containerID="cbd8bf4911c039bab8c926015a64a5f4451e5cbbf549074c9aecbfc3f4884cf4" Jan 30 10:24:35 crc kubenswrapper[4984]: I0130 10:24:35.043295 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" 
event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerStarted","Data":"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da"} Jan 30 10:24:35 crc kubenswrapper[4984]: I0130 10:24:35.064648 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkwf8" podStartSLOduration=1.296489039 podStartE2EDuration="4.064635489s" podCreationTimestamp="2026-01-30 10:24:31 +0000 UTC" firstStartedPulling="2026-01-30 10:24:32.006743769 +0000 UTC m=+776.573047583" lastFinishedPulling="2026-01-30 10:24:34.774890209 +0000 UTC m=+779.341194033" observedRunningTime="2026-01-30 10:24:35.062583881 +0000 UTC m=+779.628887725" watchObservedRunningTime="2026-01-30 10:24:35.064635489 +0000 UTC m=+779.630939313" Jan 30 10:24:41 crc kubenswrapper[4984]: I0130 10:24:41.476745 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:41 crc kubenswrapper[4984]: I0130 10:24:41.477288 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:41 crc kubenswrapper[4984]: I0130 10:24:41.553976 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:42 crc kubenswrapper[4984]: I0130 10:24:42.138736 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:42 crc kubenswrapper[4984]: I0130 10:24:42.739587 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791205 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk"] Jan 30 10:24:43 crc kubenswrapper[4984]: E0130 10:24:43.791662 4984 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="pull" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791674 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="pull" Jan 30 10:24:43 crc kubenswrapper[4984]: E0130 10:24:43.791684 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="util" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791689 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="util" Jan 30 10:24:43 crc kubenswrapper[4984]: E0130 10:24:43.791696 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="extract" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791703 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="extract" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.791801 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d796f450-1311-422f-9f63-324d0a624f15" containerName="extract" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.792216 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794314 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794345 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794479 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wllvq" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.794841 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.797086 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.849950 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-apiservice-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.850046 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8cz4\" (UniqueName: \"kubernetes.io/projected/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-kube-api-access-s8cz4\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 
10:24:43.850086 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-webhook-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.854544 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk"] Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.951273 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-webhook-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.951673 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-apiservice-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.951823 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8cz4\" (UniqueName: \"kubernetes.io/projected/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-kube-api-access-s8cz4\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.957369 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-webhook-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.957502 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-apiservice-cert\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:43 crc kubenswrapper[4984]: I0130 10:24:43.968428 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8cz4\" (UniqueName: \"kubernetes.io/projected/fb5cf2c1-4334-4aee-9f94-2f1c2797b484-kube-api-access-s8cz4\") pod \"metallb-operator-controller-manager-f5cdbcd49-plrfk\" (UID: \"fb5cf2c1-4334-4aee-9f94-2f1c2797b484\") " pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.101052 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkwf8" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" containerID="cri-o://785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" gracePeriod=2 Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.117455 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4"] Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.118122 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.118702 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.121129 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dr6xh" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.121169 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.121359 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.151581 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4"] Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.154894 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-apiservice-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.154946 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-webhook-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.154979 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btn5\" (UniqueName: \"kubernetes.io/projected/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-kube-api-access-5btn5\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.255560 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-apiservice-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.255613 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-webhook-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.255658 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btn5\" (UniqueName: \"kubernetes.io/projected/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-kube-api-access-5btn5\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.271264 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-webhook-cert\") pod 
\"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.274791 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-apiservice-cert\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.282008 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5btn5\" (UniqueName: \"kubernetes.io/projected/b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05-kube-api-access-5btn5\") pod \"metallb-operator-webhook-server-86d4db4f7b-qz6m4\" (UID: \"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05\") " pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.483518 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.541407 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk"] Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.559731 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") pod \"a0edede8-30cb-4add-9a06-830084c7c57b\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.561157 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") pod \"a0edede8-30cb-4add-9a06-830084c7c57b\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.561232 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") pod \"a0edede8-30cb-4add-9a06-830084c7c57b\" (UID: \"a0edede8-30cb-4add-9a06-830084c7c57b\") " Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.561915 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities" (OuterVolumeSpecName: "utilities") pod "a0edede8-30cb-4add-9a06-830084c7c57b" (UID: "a0edede8-30cb-4add-9a06-830084c7c57b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.565179 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp" (OuterVolumeSpecName: "kube-api-access-w8znp") pod "a0edede8-30cb-4add-9a06-830084c7c57b" (UID: "a0edede8-30cb-4add-9a06-830084c7c57b"). InnerVolumeSpecName "kube-api-access-w8znp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.571490 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.662881 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8znp\" (UniqueName: \"kubernetes.io/projected/a0edede8-30cb-4add-9a06-830084c7c57b-kube-api-access-w8znp\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.662914 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.671603 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0edede8-30cb-4add-9a06-830084c7c57b" (UID: "a0edede8-30cb-4add-9a06-830084c7c57b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:24:44 crc kubenswrapper[4984]: I0130 10:24:44.766679 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0edede8-30cb-4add-9a06-830084c7c57b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.017906 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4"] Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.114729 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" event={"ID":"fb5cf2c1-4334-4aee-9f94-2f1c2797b484","Type":"ContainerStarted","Data":"78a0264aeda8cbf2c2c638183fa3a8774b559697e94e44af00c7b09fb492be40"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.119445 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" event={"ID":"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05","Type":"ContainerStarted","Data":"7016d895b04fe74310f5b513c915fe3094063d531d69d40350be412797189e23"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.122783 4984 generic.go:334] "Generic (PLEG): container finished" podID="a0edede8-30cb-4add-9a06-830084c7c57b" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" exitCode=0 Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.122917 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwf8" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.123036 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.123132 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwf8" event={"ID":"a0edede8-30cb-4add-9a06-830084c7c57b","Type":"ContainerDied","Data":"a552366bd60a088f98781cc480bddc1054116923b4637e5e3978f79539b7bf40"} Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.123185 4984 scope.go:117] "RemoveContainer" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.151910 4984 scope.go:117] "RemoveContainer" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.159882 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.163767 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkwf8"] Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.175855 4984 scope.go:117] "RemoveContainer" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.188346 4984 scope.go:117] "RemoveContainer" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" Jan 30 10:24:45 crc kubenswrapper[4984]: E0130 10:24:45.188804 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da\": container with ID starting with 785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da not found: ID does not exist" containerID="785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.188843 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da"} err="failed to get container status \"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da\": rpc error: code = NotFound desc = could not find container \"785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da\": container with ID starting with 785e15b0eb93fb8b14a0093aca78dfde25a10e868cc2435a906043d1061dd4da not found: ID does not exist" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.188870 4984 scope.go:117] "RemoveContainer" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" Jan 30 10:24:45 crc kubenswrapper[4984]: E0130 10:24:45.189228 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1\": container with ID starting with 47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1 not found: ID does not exist" containerID="47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.189274 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1"} err="failed to get container status \"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1\": rpc error: code = NotFound desc = could not find container \"47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1\": container with ID 
starting with 47a74400ac0c42fe6fd95c24fabccb0bf42c53d1f1b49b7350658ef0082279a1 not found: ID does not exist" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.189292 4984 scope.go:117] "RemoveContainer" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" Jan 30 10:24:45 crc kubenswrapper[4984]: E0130 10:24:45.189646 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef\": container with ID starting with 0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef not found: ID does not exist" containerID="0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef" Jan 30 10:24:45 crc kubenswrapper[4984]: I0130 10:24:45.189669 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef"} err="failed to get container status \"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef\": rpc error: code = NotFound desc = could not find container \"0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef\": container with ID starting with 0a3aeb6c33c7fba562a3be4dd78c6f279d202e141962d2fbd48b5e962766f8ef not found: ID does not exist" Jan 30 10:24:46 crc kubenswrapper[4984]: I0130 10:24:46.111114 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" path="/var/lib/kubelet/pods/a0edede8-30cb-4add-9a06-830084c7c57b/volumes" Jan 30 10:24:48 crc kubenswrapper[4984]: I0130 10:24:48.157367 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" event={"ID":"fb5cf2c1-4334-4aee-9f94-2f1c2797b484","Type":"ContainerStarted","Data":"c9d7cb153da5f792a55521dbeb275f836c6114611a4cf84e1eefcd63a9391730"} Jan 30 10:24:48 crc kubenswrapper[4984]: I0130 
10:24:48.158582 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:24:48 crc kubenswrapper[4984]: I0130 10:24:48.185156 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" podStartSLOduration=2.312029655 podStartE2EDuration="5.18514091s" podCreationTimestamp="2026-01-30 10:24:43 +0000 UTC" firstStartedPulling="2026-01-30 10:24:44.54618209 +0000 UTC m=+789.112485914" lastFinishedPulling="2026-01-30 10:24:47.419293345 +0000 UTC m=+791.985597169" observedRunningTime="2026-01-30 10:24:48.180863244 +0000 UTC m=+792.747167068" watchObservedRunningTime="2026-01-30 10:24:48.18514091 +0000 UTC m=+792.751444724" Jan 30 10:24:51 crc kubenswrapper[4984]: I0130 10:24:51.180885 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" event={"ID":"b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05","Type":"ContainerStarted","Data":"b1c9e3ad29f0878d2568c3c76b1ee26e4c59f762a52161e077b3e9a4edc47c8b"} Jan 30 10:24:51 crc kubenswrapper[4984]: I0130 10:24:51.182608 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:24:51 crc kubenswrapper[4984]: I0130 10:24:51.212468 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" podStartSLOduration=2.137636963 podStartE2EDuration="7.212445086s" podCreationTimestamp="2026-01-30 10:24:44 +0000 UTC" firstStartedPulling="2026-01-30 10:24:45.048725987 +0000 UTC m=+789.615029811" lastFinishedPulling="2026-01-30 10:24:50.1235341 +0000 UTC m=+794.689837934" observedRunningTime="2026-01-30 10:24:51.207521643 +0000 UTC m=+795.773825487" watchObservedRunningTime="2026-01-30 10:24:51.212445086 +0000 UTC m=+795.778748930" Jan 30 
10:25:04 crc kubenswrapper[4984]: I0130 10:25:04.580834 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86d4db4f7b-qz6m4" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.122881 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-f5cdbcd49-plrfk" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948469 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z7vlt"] Jan 30 10:25:24 crc kubenswrapper[4984]: E0130 10:25:24.948705 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-content" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948720 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-content" Jan 30 10:25:24 crc kubenswrapper[4984]: E0130 10:25:24.948734 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948741 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" Jan 30 10:25:24 crc kubenswrapper[4984]: E0130 10:25:24.948762 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-utilities" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948771 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="extract-utilities" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.948884 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0edede8-30cb-4add-9a06-830084c7c57b" containerName="registry-server" Jan 30 10:25:24 crc kubenswrapper[4984]: 
I0130 10:25:24.951143 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.953025 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.953370 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.953544 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gkwrr" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.973833 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw"] Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.974632 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.976575 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 10:25:24 crc kubenswrapper[4984]: I0130 10:25:24.996981 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037365 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6gbr\" (UniqueName: \"kubernetes.io/projected/997946ae-eb76-422f-9954-d9dae3ca8184-kube-api-access-k6gbr\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037419 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-conf\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037451 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e54bb11-7cfb-4840-b861-bd6d184c36f4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037477 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-reloader\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037541 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-metrics\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037579 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-sockets\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037603 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod 
\"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037678 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5pdf\" (UniqueName: \"kubernetes.io/projected/7e54bb11-7cfb-4840-b861-bd6d184c36f4-kube-api-access-x5pdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.037704 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/997946ae-eb76-422f-9954-d9dae3ca8184-frr-startup\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.073890 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wc8c7"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.074795 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.077412 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-4tngn"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.078311 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079325 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079352 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079396 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.079352 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.081060 4984 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-czq8s" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.084301 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4tngn"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.141956 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6gbr\" (UniqueName: \"kubernetes.io/projected/997946ae-eb76-422f-9954-d9dae3ca8184-kube-api-access-k6gbr\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142325 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-conf\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142358 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/7e54bb11-7cfb-4840-b861-bd6d184c36f4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142403 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-reloader\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142531 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142764 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-conf\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-reloader\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142885 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07684256-0759-426a-9ba0-40514aa3e7ac-metallb-excludel2\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " 
pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142912 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-755js\" (UniqueName: \"kubernetes.io/projected/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-kube-api-access-755js\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.142963 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-metrics\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143048 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-metrics-certs\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143078 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-sockets\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143105 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.143197 4984 secret.go:188] Couldn't 
get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.143261 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs podName:997946ae-eb76-422f-9954-d9dae3ca8184 nodeName:}" failed. No retries permitted until 2026-01-30 10:25:25.643226747 +0000 UTC m=+830.209530571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs") pod "frr-k8s-z7vlt" (UID: "997946ae-eb76-422f-9954-d9dae3ca8184") : secret "frr-k8s-certs-secret" not found Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143197 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5pdf\" (UniqueName: \"kubernetes.io/projected/7e54bb11-7cfb-4840-b861-bd6d184c36f4-kube-api-access-x5pdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143297 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/997946ae-eb76-422f-9954-d9dae3ca8184-frr-startup\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143324 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-cert\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143347 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-metrics-certs\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmvg\" (UniqueName: \"kubernetes.io/projected/07684256-0759-426a-9ba0-40514aa3e7ac-kube-api-access-dmmvg\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143612 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-metrics\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.143641 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/997946ae-eb76-422f-9954-d9dae3ca8184-frr-sockets\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.144340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/997946ae-eb76-422f-9954-d9dae3ca8184-frr-startup\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.163068 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6gbr\" (UniqueName: 
\"kubernetes.io/projected/997946ae-eb76-422f-9954-d9dae3ca8184-kube-api-access-k6gbr\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.163720 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e54bb11-7cfb-4840-b861-bd6d184c36f4-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.165236 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5pdf\" (UniqueName: \"kubernetes.io/projected/7e54bb11-7cfb-4840-b861-bd6d184c36f4-kube-api-access-x5pdf\") pod \"frr-k8s-webhook-server-7df86c4f6c-j62qw\" (UID: \"7e54bb11-7cfb-4840-b861-bd6d184c36f4\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmvg\" (UniqueName: \"kubernetes.io/projected/07684256-0759-426a-9ba0-40514aa3e7ac-kube-api-access-dmmvg\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244129 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244152 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07684256-0759-426a-9ba0-40514aa3e7ac-metallb-excludel2\") pod 
\"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244184 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-755js\" (UniqueName: \"kubernetes.io/projected/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-kube-api-access-755js\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244210 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-metrics-certs\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244280 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-cert\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.244295 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-metrics-certs\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.244700 4984 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.244793 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist 
podName:07684256-0759-426a-9ba0-40514aa3e7ac nodeName:}" failed. No retries permitted until 2026-01-30 10:25:25.744768821 +0000 UTC m=+830.311072705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist") pod "speaker-wc8c7" (UID: "07684256-0759-426a-9ba0-40514aa3e7ac") : secret "metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.245316 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07684256-0759-426a-9ba0-40514aa3e7ac-metallb-excludel2\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.248386 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-cert\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.255749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-metrics-certs\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.256747 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-metrics-certs\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.259434 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dmmvg\" (UniqueName: \"kubernetes.io/projected/07684256-0759-426a-9ba0-40514aa3e7ac-kube-api-access-dmmvg\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.265962 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-755js\" (UniqueName: \"kubernetes.io/projected/2ae05bf6-d99c-4fb1-9780-20249ec78e1e-kube-api-access-755js\") pod \"controller-6968d8fdc4-4tngn\" (UID: \"2ae05bf6-d99c-4fb1-9780-20249ec78e1e\") " pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.288798 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.486964 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw"] Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.491388 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.648618 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.653548 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/997946ae-eb76-422f-9954-d9dae3ca8184-metrics-certs\") pod \"frr-k8s-z7vlt\" (UID: \"997946ae-eb76-422f-9954-d9dae3ca8184\") " pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.672377 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4tngn"] Jan 30 10:25:25 crc kubenswrapper[4984]: W0130 10:25:25.680864 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ae05bf6_d99c_4fb1_9780_20249ec78e1e.slice/crio-e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554 WatchSource:0}: Error finding container e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554: Status 404 returned error can't find the container with id e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554 Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.749377 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.749613 4984 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: E0130 10:25:25.749673 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist podName:07684256-0759-426a-9ba0-40514aa3e7ac nodeName:}" failed. No retries permitted until 2026-01-30 10:25:26.749657251 +0000 UTC m=+831.315961085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist") pod "speaker-wc8c7" (UID: "07684256-0759-426a-9ba0-40514aa3e7ac") : secret "metallb-memberlist" not found Jan 30 10:25:25 crc kubenswrapper[4984]: I0130 10:25:25.868331 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.412944 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"7a089b9f7e74c897b0a73d5388e438e3794a31cab9612c5cf8edf519b984546c"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.414967 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" event={"ID":"7e54bb11-7cfb-4840-b861-bd6d184c36f4","Type":"ContainerStarted","Data":"58e004b6ed388421432c0776f6f27705d25295c834b84e458409dc8ca7bcd3ff"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416602 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4tngn" event={"ID":"2ae05bf6-d99c-4fb1-9780-20249ec78e1e","Type":"ContainerStarted","Data":"7f6fe00cbaf737c0f731065c567bbe878a8bf4d5bdef0a143440acd1fe1daf23"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4tngn" 
event={"ID":"2ae05bf6-d99c-4fb1-9780-20249ec78e1e","Type":"ContainerStarted","Data":"7610d1a071e562b3a42e48c46355e5d7df1566783312bfa1e11e3d093e53e2d4"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416647 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4tngn" event={"ID":"2ae05bf6-d99c-4fb1-9780-20249ec78e1e","Type":"ContainerStarted","Data":"e575a46b339899c2fb7a4c9c8e777906d63da54e5559550fa1b40682739cc554"} Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.416808 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.448697 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-4tngn" podStartSLOduration=1.448669375 podStartE2EDuration="1.448669375s" podCreationTimestamp="2026-01-30 10:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:25:26.440219696 +0000 UTC m=+831.006523530" watchObservedRunningTime="2026-01-30 10:25:26.448669375 +0000 UTC m=+831.014973239" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.763268 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.770005 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07684256-0759-426a-9ba0-40514aa3e7ac-memberlist\") pod \"speaker-wc8c7\" (UID: \"07684256-0759-426a-9ba0-40514aa3e7ac\") " pod="metallb-system/speaker-wc8c7" Jan 30 10:25:26 crc kubenswrapper[4984]: I0130 10:25:26.948877 4984 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:27 crc kubenswrapper[4984]: I0130 10:25:27.434375 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc8c7" event={"ID":"07684256-0759-426a-9ba0-40514aa3e7ac","Type":"ContainerStarted","Data":"9f977a546441f73297e073da518ebb9c4bf6a4f48b630b6b222f2b2c46a10047"} Jan 30 10:25:27 crc kubenswrapper[4984]: I0130 10:25:27.434410 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc8c7" event={"ID":"07684256-0759-426a-9ba0-40514aa3e7ac","Type":"ContainerStarted","Data":"de15cff346b174cacc26d1ce8dfcbb6ab999dc4665410ef5c5dc4df94dbcbb8e"} Jan 30 10:25:28 crc kubenswrapper[4984]: I0130 10:25:28.449397 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wc8c7" event={"ID":"07684256-0759-426a-9ba0-40514aa3e7ac","Type":"ContainerStarted","Data":"96fe113673ee6948d85f2d9469976880f5f0a6c26078f90e457ae0e095f6dc53"} Jan 30 10:25:28 crc kubenswrapper[4984]: I0130 10:25:28.449871 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:28 crc kubenswrapper[4984]: I0130 10:25:28.471071 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wc8c7" podStartSLOduration=3.471047651 podStartE2EDuration="3.471047651s" podCreationTimestamp="2026-01-30 10:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:25:28.469604702 +0000 UTC m=+833.035908536" watchObservedRunningTime="2026-01-30 10:25:28.471047651 +0000 UTC m=+833.037351475" Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.481560 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" 
event={"ID":"7e54bb11-7cfb-4840-b861-bd6d184c36f4","Type":"ContainerStarted","Data":"060f3efffab5f8e3e74e6284f9bfc38965e01dbd990aa488e26f3bde4989e5fd"} Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.482118 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.482780 4984 generic.go:334] "Generic (PLEG): container finished" podID="997946ae-eb76-422f-9954-d9dae3ca8184" containerID="46e8f02958a2a8983a48d8c28e1e83c1cc2948423bbc66e4e6d490013fd69616" exitCode=0 Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.482803 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerDied","Data":"46e8f02958a2a8983a48d8c28e1e83c1cc2948423bbc66e4e6d490013fd69616"} Jan 30 10:25:33 crc kubenswrapper[4984]: I0130 10:25:33.498218 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" podStartSLOduration=1.8635116489999999 podStartE2EDuration="9.498202873s" podCreationTimestamp="2026-01-30 10:25:24 +0000 UTC" firstStartedPulling="2026-01-30 10:25:25.501085131 +0000 UTC m=+830.067388955" lastFinishedPulling="2026-01-30 10:25:33.135776365 +0000 UTC m=+837.702080179" observedRunningTime="2026-01-30 10:25:33.496481326 +0000 UTC m=+838.062785150" watchObservedRunningTime="2026-01-30 10:25:33.498202873 +0000 UTC m=+838.064506697" Jan 30 10:25:34 crc kubenswrapper[4984]: I0130 10:25:34.490085 4984 generic.go:334] "Generic (PLEG): container finished" podID="997946ae-eb76-422f-9954-d9dae3ca8184" containerID="0a6702565104b0cce46eedcc9d14ad5cc67bdf5bf4b717e8689bbff7af838598" exitCode=0 Jan 30 10:25:34 crc kubenswrapper[4984]: I0130 10:25:34.490171 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" 
event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerDied","Data":"0a6702565104b0cce46eedcc9d14ad5cc67bdf5bf4b717e8689bbff7af838598"} Jan 30 10:25:35 crc kubenswrapper[4984]: I0130 10:25:35.496057 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-4tngn" Jan 30 10:25:35 crc kubenswrapper[4984]: I0130 10:25:35.501749 4984 generic.go:334] "Generic (PLEG): container finished" podID="997946ae-eb76-422f-9954-d9dae3ca8184" containerID="ffe6d4d10770c846f664d56d694daf14cf564872514e6f4d91147c082b73bb71" exitCode=0 Jan 30 10:25:35 crc kubenswrapper[4984]: I0130 10:25:35.501800 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerDied","Data":"ffe6d4d10770c846f664d56d694daf14cf564872514e6f4d91147c082b73bb71"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511459 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"7fe4222c13517cb85341dcab25b4281c9ea010c138f0355bb412237459bc5ccd"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511823 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"9795cf593ee73066007d0e4bb5869bd88b22d5d2c6081171c3b77f4fb3e815c4"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511837 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"7f89ec3198fc7bddcb2c991fdd70deeb6373ebec08b366cde22fb079f6a9275b"} Jan 30 10:25:36 crc kubenswrapper[4984]: I0130 10:25:36.511847 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" 
event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"f914d87a23774e7d017510e56064082fbf680e91f94aac36037096ed2b2bc88a"} Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.535525 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"2c4068053574c030f20690a866a7a73aabf0715254899897c3146aab79b13864"} Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.535960 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z7vlt" event={"ID":"997946ae-eb76-422f-9954-d9dae3ca8184","Type":"ContainerStarted","Data":"caf74a790daf2528309bd05f398d4061b6b673097f654ca1e2625f92790bfefc"} Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.536473 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:37 crc kubenswrapper[4984]: I0130 10:25:37.577562 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z7vlt" podStartSLOduration=6.41679198 podStartE2EDuration="13.577545804s" podCreationTimestamp="2026-01-30 10:25:24 +0000 UTC" firstStartedPulling="2026-01-30 10:25:25.969679527 +0000 UTC m=+830.535983351" lastFinishedPulling="2026-01-30 10:25:33.130433331 +0000 UTC m=+837.696737175" observedRunningTime="2026-01-30 10:25:37.573034282 +0000 UTC m=+842.139338186" watchObservedRunningTime="2026-01-30 10:25:37.577545804 +0000 UTC m=+842.143849638" Jan 30 10:25:40 crc kubenswrapper[4984]: I0130 10:25:40.869579 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:40 crc kubenswrapper[4984]: I0130 10:25:40.940105 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:45 crc kubenswrapper[4984]: I0130 10:25:45.295101 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-j62qw" Jan 30 10:25:45 crc kubenswrapper[4984]: I0130 10:25:45.875761 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z7vlt" Jan 30 10:25:46 crc kubenswrapper[4984]: I0130 10:25:46.953101 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wc8c7" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.030323 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.031748 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.036355 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.036496 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7274q" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.041346 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.047612 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.213548 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"openstack-operator-index-npz6v\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc 
kubenswrapper[4984]: I0130 10:25:50.315112 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"openstack-operator-index-npz6v\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.337013 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"openstack-operator-index-npz6v\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.356697 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.589240 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:50 crc kubenswrapper[4984]: W0130 10:25:50.602077 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod215dcee8_cadb_424f_98c5_ee7ebcf45d3a.slice/crio-c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f WatchSource:0}: Error finding container c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f: Status 404 returned error can't find the container with id c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f Jan 30 10:25:50 crc kubenswrapper[4984]: I0130 10:25:50.630138 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" 
event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerStarted","Data":"c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f"} Jan 30 10:25:53 crc kubenswrapper[4984]: I0130 10:25:53.410438 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:53 crc kubenswrapper[4984]: I0130 10:25:53.656608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerStarted","Data":"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041"} Jan 30 10:25:53 crc kubenswrapper[4984]: I0130 10:25:53.677496 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-npz6v" podStartSLOduration=1.433583782 podStartE2EDuration="3.67743793s" podCreationTimestamp="2026-01-30 10:25:50 +0000 UTC" firstStartedPulling="2026-01-30 10:25:50.607411754 +0000 UTC m=+855.173715578" lastFinishedPulling="2026-01-30 10:25:52.851265892 +0000 UTC m=+857.417569726" observedRunningTime="2026-01-30 10:25:53.673698052 +0000 UTC m=+858.240001876" watchObservedRunningTime="2026-01-30 10:25:53.67743793 +0000 UTC m=+858.243741764" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.024846 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nqgjv"] Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.026991 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.037003 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nqgjv"] Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.083045 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58jxf\" (UniqueName: \"kubernetes.io/projected/be54871d-c3f5-40bc-b6cd-63602755ca51-kube-api-access-58jxf\") pod \"openstack-operator-index-nqgjv\" (UID: \"be54871d-c3f5-40bc-b6cd-63602755ca51\") " pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.185591 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58jxf\" (UniqueName: \"kubernetes.io/projected/be54871d-c3f5-40bc-b6cd-63602755ca51-kube-api-access-58jxf\") pod \"openstack-operator-index-nqgjv\" (UID: \"be54871d-c3f5-40bc-b6cd-63602755ca51\") " pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.211053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58jxf\" (UniqueName: \"kubernetes.io/projected/be54871d-c3f5-40bc-b6cd-63602755ca51-kube-api-access-58jxf\") pod \"openstack-operator-index-nqgjv\" (UID: \"be54871d-c3f5-40bc-b6cd-63602755ca51\") " pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.364056 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.663507 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-npz6v" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" containerID="cri-o://e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" gracePeriod=2 Jan 30 10:25:54 crc kubenswrapper[4984]: I0130 10:25:54.838936 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nqgjv"] Jan 30 10:25:54 crc kubenswrapper[4984]: W0130 10:25:54.840179 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe54871d_c3f5_40bc_b6cd_63602755ca51.slice/crio-1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d WatchSource:0}: Error finding container 1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d: Status 404 returned error can't find the container with id 1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.026822 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.098420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") pod \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\" (UID: \"215dcee8-cadb-424f-98c5-ee7ebcf45d3a\") " Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.106184 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc" (OuterVolumeSpecName: "kube-api-access-2wgjc") pod "215dcee8-cadb-424f-98c5-ee7ebcf45d3a" (UID: "215dcee8-cadb-424f-98c5-ee7ebcf45d3a"). InnerVolumeSpecName "kube-api-access-2wgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.200921 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgjc\" (UniqueName: \"kubernetes.io/projected/215dcee8-cadb-424f-98c5-ee7ebcf45d3a-kube-api-access-2wgjc\") on node \"crc\" DevicePath \"\"" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.672875 4984 generic.go:334] "Generic (PLEG): container finished" podID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" exitCode=0 Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.672988 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-npz6v" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.672982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerDied","Data":"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.673186 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npz6v" event={"ID":"215dcee8-cadb-424f-98c5-ee7ebcf45d3a","Type":"ContainerDied","Data":"c100ec844949a9cf8249a189ada3b5db9ba5e5c4c77e9abcdce2eb0e67567b7f"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.673229 4984 scope.go:117] "RemoveContainer" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.675033 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nqgjv" event={"ID":"be54871d-c3f5-40bc-b6cd-63602755ca51","Type":"ContainerStarted","Data":"77909d946bdbcbc1723a2e7efd64b09a8447e749ff9b1ccedd9874d920cf3f82"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.675099 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nqgjv" event={"ID":"be54871d-c3f5-40bc-b6cd-63602755ca51","Type":"ContainerStarted","Data":"1c972b9aa2521f9c5121cdaa9d0cefcc232c83f6fe5bfc33821ed9d78dc3d66d"} Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.703286 4984 scope.go:117] "RemoveContainer" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" Jan 30 10:25:55 crc kubenswrapper[4984]: E0130 10:25:55.705239 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041\": container with ID starting with e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041 not found: ID does not exist" containerID="e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.705477 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041"} err="failed to get container status \"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041\": rpc error: code = NotFound desc = could not find container \"e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041\": container with ID starting with e449dc0f2d41fa72db9d8b9ddebc98c167fa3f19e24be2ea842f015db229b041 not found: ID does not exist" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.719235 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nqgjv" podStartSLOduration=1.652136274 podStartE2EDuration="1.71921515s" podCreationTimestamp="2026-01-30 10:25:54 +0000 UTC" firstStartedPulling="2026-01-30 10:25:54.847234465 +0000 UTC m=+859.413538289" lastFinishedPulling="2026-01-30 10:25:54.914313331 +0000 UTC m=+859.480617165" observedRunningTime="2026-01-30 10:25:55.704709068 +0000 UTC m=+860.271012922" watchObservedRunningTime="2026-01-30 10:25:55.71921515 +0000 UTC m=+860.285518974" Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.721722 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:55 crc kubenswrapper[4984]: I0130 10:25:55.726769 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-npz6v"] Jan 30 10:25:56 crc kubenswrapper[4984]: I0130 10:25:56.104443 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" path="/var/lib/kubelet/pods/215dcee8-cadb-424f-98c5-ee7ebcf45d3a/volumes" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.364513 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.365015 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.401540 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:04 crc kubenswrapper[4984]: I0130 10:26:04.788930 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nqgjv" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.377915 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw"] Jan 30 10:26:11 crc kubenswrapper[4984]: E0130 10:26:11.378870 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.378888 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.379053 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="215dcee8-cadb-424f-98c5-ee7ebcf45d3a" containerName="registry-server" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.380467 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.424619 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw"] Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.424895 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jq7m4" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.439396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.439444 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.439496 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 
10:26:11.540916 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541021 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541047 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541807 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.541858 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.565558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:11 crc kubenswrapper[4984]: I0130 10:26:11.736474 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.025668 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw"] Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.842488 4984 generic.go:334] "Generic (PLEG): container finished" podID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerID="ba83d11e0beab8264fb47a35d00a82c25ea68f771b87cc983330039260a2defc" exitCode=0 Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.842548 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"ba83d11e0beab8264fb47a35d00a82c25ea68f771b87cc983330039260a2defc"} Jan 30 10:26:12 crc kubenswrapper[4984]: I0130 10:26:12.842946 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerStarted","Data":"8e94377a18d5da81d679783bfd83611dd6a34e5dfaf77dfc19648d278742cc3f"} Jan 30 10:26:13 crc kubenswrapper[4984]: I0130 10:26:13.853848 4984 generic.go:334] "Generic (PLEG): container finished" podID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerID="74f318d1b9c26527168de678b10eef5ee23b24fcedab32a1f391fc17580575fd" exitCode=0 Jan 30 10:26:13 crc kubenswrapper[4984]: I0130 10:26:13.853920 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"74f318d1b9c26527168de678b10eef5ee23b24fcedab32a1f391fc17580575fd"} Jan 30 10:26:14 crc kubenswrapper[4984]: I0130 10:26:14.864944 4984 generic.go:334] "Generic (PLEG): container finished" podID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerID="9fbbaf825a8cfb2501a30b220334bf1a4eb09724fcc91ea6eecc0dab8e85af46" exitCode=0 Jan 30 10:26:14 crc kubenswrapper[4984]: I0130 10:26:14.864996 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"9fbbaf825a8cfb2501a30b220334bf1a4eb09724fcc91ea6eecc0dab8e85af46"} Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.213315 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.412039 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") pod \"66ab9762-201b-40f3-8d9b-1d114a7d778e\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.412641 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") pod \"66ab9762-201b-40f3-8d9b-1d114a7d778e\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.413043 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") pod \"66ab9762-201b-40f3-8d9b-1d114a7d778e\" (UID: \"66ab9762-201b-40f3-8d9b-1d114a7d778e\") " Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.413660 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle" (OuterVolumeSpecName: "bundle") pod "66ab9762-201b-40f3-8d9b-1d114a7d778e" (UID: "66ab9762-201b-40f3-8d9b-1d114a7d778e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.414033 4984 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.425596 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd" (OuterVolumeSpecName: "kube-api-access-2qpbd") pod "66ab9762-201b-40f3-8d9b-1d114a7d778e" (UID: "66ab9762-201b-40f3-8d9b-1d114a7d778e"). InnerVolumeSpecName "kube-api-access-2qpbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.425910 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util" (OuterVolumeSpecName: "util") pod "66ab9762-201b-40f3-8d9b-1d114a7d778e" (UID: "66ab9762-201b-40f3-8d9b-1d114a7d778e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.515356 4984 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66ab9762-201b-40f3-8d9b-1d114a7d778e-util\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.515402 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qpbd\" (UniqueName: \"kubernetes.io/projected/66ab9762-201b-40f3-8d9b-1d114a7d778e-kube-api-access-2qpbd\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.890279 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" event={"ID":"66ab9762-201b-40f3-8d9b-1d114a7d778e","Type":"ContainerDied","Data":"8e94377a18d5da81d679783bfd83611dd6a34e5dfaf77dfc19648d278742cc3f"} Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.890351 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e94377a18d5da81d679783bfd83611dd6a34e5dfaf77dfc19648d278742cc3f" Jan 30 10:26:16 crc kubenswrapper[4984]: I0130 10:26:16.890369 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.050977 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"] Jan 30 10:26:24 crc kubenswrapper[4984]: E0130 10:26:24.053001 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="util" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053104 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="util" Jan 30 10:26:24 crc kubenswrapper[4984]: E0130 10:26:24.053187 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="extract" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053284 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="extract" Jan 30 10:26:24 crc kubenswrapper[4984]: E0130 10:26:24.053398 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="pull" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053510 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="pull" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.053744 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ab9762-201b-40f3-8d9b-1d114a7d778e" containerName="extract" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.054383 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.056879 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-x8fxc" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.144778 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxl5p\" (UniqueName: \"kubernetes.io/projected/f4b80c7c-3e81-48d4-862c-684369655891-kube-api-access-vxl5p\") pod \"openstack-operator-controller-init-7d4ff8bbbc-68r69\" (UID: \"f4b80c7c-3e81-48d4-862c-684369655891\") " pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.163473 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"] Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.247483 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxl5p\" (UniqueName: \"kubernetes.io/projected/f4b80c7c-3e81-48d4-862c-684369655891-kube-api-access-vxl5p\") pod \"openstack-operator-controller-init-7d4ff8bbbc-68r69\" (UID: \"f4b80c7c-3e81-48d4-862c-684369655891\") " pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.264759 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxl5p\" (UniqueName: \"kubernetes.io/projected/f4b80c7c-3e81-48d4-862c-684369655891-kube-api-access-vxl5p\") pod \"openstack-operator-controller-init-7d4ff8bbbc-68r69\" (UID: \"f4b80c7c-3e81-48d4-862c-684369655891\") " pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.370689 4984 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.797928 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69"] Jan 30 10:26:24 crc kubenswrapper[4984]: I0130 10:26:24.949497 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" event={"ID":"f4b80c7c-3e81-48d4-862c-684369655891","Type":"ContainerStarted","Data":"88eaf4544290c8cbfff907a2f75cf49f33c40a7f119651cacf639a099c1510df"} Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.056877 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.058317 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.074501 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.074564 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.075176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.079323 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176050 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176150 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176647 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " 
pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.176666 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.198470 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"community-operators-94xpf\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:26 crc kubenswrapper[4984]: I0130 10:26:26.378064 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.741486 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:28 crc kubenswrapper[4984]: W0130 10:26:28.742751 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9468373_fc0f_4d5e_85c0_4d09686d9b9a.slice/crio-40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64 WatchSource:0}: Error finding container 40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64: Status 404 returned error can't find the container with id 40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64 Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.986167 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" 
event={"ID":"f4b80c7c-3e81-48d4-862c-684369655891","Type":"ContainerStarted","Data":"4b0159011825f1cd752eacb10e8b694aabb94c7c7ed94ff52e434800845798ae"} Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.986311 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.988471 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerID="b9c29423280037e67829c95b4f1cefcf040741990711bb2b73a1e87ab606013f" exitCode=0 Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.988507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"b9c29423280037e67829c95b4f1cefcf040741990711bb2b73a1e87ab606013f"} Jan 30 10:26:28 crc kubenswrapper[4984]: I0130 10:26:28.988536 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerStarted","Data":"40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64"} Jan 30 10:26:29 crc kubenswrapper[4984]: I0130 10:26:29.043732 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" podStartSLOduration=1.197445472 podStartE2EDuration="5.043710312s" podCreationTimestamp="2026-01-30 10:26:24 +0000 UTC" firstStartedPulling="2026-01-30 10:26:24.808506863 +0000 UTC m=+889.374810697" lastFinishedPulling="2026-01-30 10:26:28.654771713 +0000 UTC m=+893.221075537" observedRunningTime="2026-01-30 10:26:29.033790691 +0000 UTC m=+893.600094515" watchObservedRunningTime="2026-01-30 10:26:29.043710312 +0000 UTC m=+893.610014136" Jan 30 10:26:30 crc kubenswrapper[4984]: I0130 10:26:29.999698 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerStarted","Data":"f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f"} Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.008636 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerID="f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f" exitCode=0 Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.008692 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f"} Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.009032 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerStarted","Data":"88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c"} Jan 30 10:26:31 crc kubenswrapper[4984]: I0130 10:26:31.035040 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94xpf" podStartSLOduration=3.613010908 podStartE2EDuration="5.035014012s" podCreationTimestamp="2026-01-30 10:26:26 +0000 UTC" firstStartedPulling="2026-01-30 10:26:28.990499571 +0000 UTC m=+893.556803395" lastFinishedPulling="2026-01-30 10:26:30.412502645 +0000 UTC m=+894.978806499" observedRunningTime="2026-01-30 10:26:31.031082689 +0000 UTC m=+895.597386513" watchObservedRunningTime="2026-01-30 10:26:31.035014012 +0000 UTC m=+895.601317836" Jan 30 10:26:33 crc kubenswrapper[4984]: I0130 10:26:33.001150 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:26:33 crc kubenswrapper[4984]: I0130 10:26:33.001218 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:26:34 crc kubenswrapper[4984]: I0130 10:26:34.375061 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7d4ff8bbbc-68r69" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.065007 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"] Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.066600 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.128332 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"] Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.223074 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.223805 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.224017 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.324922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325047 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325081 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325427 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.325478 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.346674 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"redhat-marketplace-l5j2l\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.379232 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.379294 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.400546 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.426154 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:36 crc kubenswrapper[4984]: I0130 10:26:36.610584 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"] Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.051288 4984 generic.go:334] "Generic (PLEG): container finished" podID="7be10507-f755-4ddf-8a9c-4699573ac179" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184" exitCode=0 Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.051380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"} Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.051422 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerStarted","Data":"a1e87647fa48ca5a25ffd87302b4d82f9faa355a900d689f8b5ed3691791996c"} Jan 30 10:26:37 crc kubenswrapper[4984]: I0130 10:26:37.102481 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:38 crc kubenswrapper[4984]: I0130 10:26:38.060161 4984 generic.go:334] "Generic (PLEG): 
container finished" podID="7be10507-f755-4ddf-8a9c-4699573ac179" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51" exitCode=0 Jan 30 10:26:38 crc kubenswrapper[4984]: I0130 10:26:38.060222 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"} Jan 30 10:26:39 crc kubenswrapper[4984]: I0130 10:26:39.070294 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerStarted","Data":"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"} Jan 30 10:26:39 crc kubenswrapper[4984]: I0130 10:26:39.098397 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5j2l" podStartSLOduration=1.471152838 podStartE2EDuration="3.098376134s" podCreationTimestamp="2026-01-30 10:26:36 +0000 UTC" firstStartedPulling="2026-01-30 10:26:37.05373124 +0000 UTC m=+901.620035064" lastFinishedPulling="2026-01-30 10:26:38.680954496 +0000 UTC m=+903.247258360" observedRunningTime="2026-01-30 10:26:39.097727057 +0000 UTC m=+903.664030881" watchObservedRunningTime="2026-01-30 10:26:39.098376134 +0000 UTC m=+903.664679978" Jan 30 10:26:40 crc kubenswrapper[4984]: I0130 10:26:40.050380 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:40 crc kubenswrapper[4984]: I0130 10:26:40.050646 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94xpf" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server" containerID="cri-o://88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c" gracePeriod=2 Jan 30 10:26:41 crc 
kubenswrapper[4984]: I0130 10:26:41.092027 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerID="88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c" exitCode=0 Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.092916 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c"} Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.785486 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.920971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") pod \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.921037 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") pod \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.921061 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") pod \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\" (UID: \"e9468373-fc0f-4d5e-85c0-4d09686d9b9a\") " Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.923196 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities" (OuterVolumeSpecName: "utilities") pod "e9468373-fc0f-4d5e-85c0-4d09686d9b9a" (UID: "e9468373-fc0f-4d5e-85c0-4d09686d9b9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.930657 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27" (OuterVolumeSpecName: "kube-api-access-mtc27") pod "e9468373-fc0f-4d5e-85c0-4d09686d9b9a" (UID: "e9468373-fc0f-4d5e-85c0-4d09686d9b9a"). InnerVolumeSpecName "kube-api-access-mtc27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:26:41 crc kubenswrapper[4984]: I0130 10:26:41.972883 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9468373-fc0f-4d5e-85c0-4d09686d9b9a" (UID: "e9468373-fc0f-4d5e-85c0-4d09686d9b9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.024724 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtc27\" (UniqueName: \"kubernetes.io/projected/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-kube-api-access-mtc27\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.024793 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.024820 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9468373-fc0f-4d5e-85c0-4d09686d9b9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.106481 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94xpf" event={"ID":"e9468373-fc0f-4d5e-85c0-4d09686d9b9a","Type":"ContainerDied","Data":"40383bdd93676975ce760bfe8a6eaec762a0f7c4baa7d06f527340167175df64"} Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.108874 4984 scope.go:117] "RemoveContainer" containerID="88b966a75a0668a115d86a79a66470d2b8325210f25d7f321e8c40855b9a7c8c" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.106540 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94xpf" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.146376 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.152493 4984 scope.go:117] "RemoveContainer" containerID="f534a39231d8e7ac8106fc3f539793faee62238d1a933015bf8e9a34d6ce027f" Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.160200 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94xpf"] Jan 30 10:26:42 crc kubenswrapper[4984]: I0130 10:26:42.176834 4984 scope.go:117] "RemoveContainer" containerID="b9c29423280037e67829c95b4f1cefcf040741990711bb2b73a1e87ab606013f" Jan 30 10:26:44 crc kubenswrapper[4984]: I0130 10:26:44.104444 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" path="/var/lib/kubelet/pods/e9468373-fc0f-4d5e-85c0-4d09686d9b9a/volumes" Jan 30 10:26:46 crc kubenswrapper[4984]: I0130 10:26:46.401145 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:46 crc kubenswrapper[4984]: I0130 10:26:46.401645 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:46 crc kubenswrapper[4984]: I0130 10:26:46.463859 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:47 crc kubenswrapper[4984]: I0130 10:26:47.216094 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:48 crc kubenswrapper[4984]: I0130 10:26:48.254221 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"] Jan 30 10:26:49 crc 
kubenswrapper[4984]: I0130 10:26:49.160462 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5j2l" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server" containerID="cri-o://8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" gracePeriod=2 Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.553863 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.740654 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") pod \"7be10507-f755-4ddf-8a9c-4699573ac179\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.740732 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") pod \"7be10507-f755-4ddf-8a9c-4699573ac179\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.740845 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") pod \"7be10507-f755-4ddf-8a9c-4699573ac179\" (UID: \"7be10507-f755-4ddf-8a9c-4699573ac179\") " Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.742679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities" (OuterVolumeSpecName: "utilities") pod "7be10507-f755-4ddf-8a9c-4699573ac179" (UID: "7be10507-f755-4ddf-8a9c-4699573ac179"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.750611 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd" (OuterVolumeSpecName: "kube-api-access-vzfnd") pod "7be10507-f755-4ddf-8a9c-4699573ac179" (UID: "7be10507-f755-4ddf-8a9c-4699573ac179"). InnerVolumeSpecName "kube-api-access-vzfnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.764584 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7be10507-f755-4ddf-8a9c-4699573ac179" (UID: "7be10507-f755-4ddf-8a9c-4699573ac179"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.842688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzfnd\" (UniqueName: \"kubernetes.io/projected/7be10507-f755-4ddf-8a9c-4699573ac179-kube-api-access-vzfnd\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.842766 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:49 crc kubenswrapper[4984]: I0130 10:26:49.842797 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be10507-f755-4ddf-8a9c-4699573ac179-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.173887 4984 generic.go:334] "Generic (PLEG): container finished" podID="7be10507-f755-4ddf-8a9c-4699573ac179" 
containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" exitCode=0 Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.173971 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"} Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.173983 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5j2l" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.174005 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5j2l" event={"ID":"7be10507-f755-4ddf-8a9c-4699573ac179","Type":"ContainerDied","Data":"a1e87647fa48ca5a25ffd87302b4d82f9faa355a900d689f8b5ed3691791996c"} Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.174055 4984 scope.go:117] "RemoveContainer" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.204573 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"] Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.211104 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5j2l"] Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.222042 4984 scope.go:117] "RemoveContainer" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.244531 4984 scope.go:117] "RemoveContainer" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.270856 4984 scope.go:117] "RemoveContainer" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" Jan 30 
10:26:50 crc kubenswrapper[4984]: E0130 10:26:50.272109 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965\": container with ID starting with 8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965 not found: ID does not exist" containerID="8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.272157 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965"} err="failed to get container status \"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965\": rpc error: code = NotFound desc = could not find container \"8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965\": container with ID starting with 8bca0ad8967f6712974d676f5ae84409da7cb40f77139343a032eb29e0f05965 not found: ID does not exist" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.272184 4984 scope.go:117] "RemoveContainer" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51" Jan 30 10:26:50 crc kubenswrapper[4984]: E0130 10:26:50.273078 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51\": container with ID starting with d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51 not found: ID does not exist" containerID="d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.273127 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51"} err="failed to get container status 
\"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51\": rpc error: code = NotFound desc = could not find container \"d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51\": container with ID starting with d4c3c48c284b4978f3fc4130412adb38a38b8e26c33590c5786ca3d0b9476d51 not found: ID does not exist" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.273158 4984 scope.go:117] "RemoveContainer" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184" Jan 30 10:26:50 crc kubenswrapper[4984]: E0130 10:26:50.273708 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184\": container with ID starting with e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184 not found: ID does not exist" containerID="e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184" Jan 30 10:26:50 crc kubenswrapper[4984]: I0130 10:26:50.273839 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184"} err="failed to get container status \"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184\": rpc error: code = NotFound desc = could not find container \"e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184\": container with ID starting with e3f3fb3c1fcb137cb9b87fd9f0b9d74f9da9ef9dadd004d084c6fdd052e1b184 not found: ID does not exist" Jan 30 10:26:52 crc kubenswrapper[4984]: I0130 10:26:52.096946 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" path="/var/lib/kubelet/pods/7be10507-f755-4ddf-8a9c-4699573ac179/volumes" Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.366373 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"] Jan 30 10:26:58 
crc kubenswrapper[4984]: E0130 10:26:58.367111 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367127 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367142 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367150 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367168 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367177 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367188 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367195 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-content"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367205 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367213 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: E0130 10:26:58.367227 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367235 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="extract-utilities"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367371 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be10507-f755-4ddf-8a9c-4699573ac179" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.367389 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9468373-fc0f-4d5e-85c0-4d09686d9b9a" containerName="registry-server"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.375746 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.383798 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.498135 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.498931 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.498979 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.600485 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.600595 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.600626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.601039 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.601135 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.621692 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"certified-operators-7g2fz\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") " pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:58 crc kubenswrapper[4984]: I0130 10:26:58.695368 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:26:59 crc kubenswrapper[4984]: I0130 10:26:59.161515 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:26:59 crc kubenswrapper[4984]: I0130 10:26:59.240782 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerStarted","Data":"563a7e524d30c4803c51ba214f5d762c3f2083539dc1784f4a771db1f2668c0c"}
Jan 30 10:27:00 crc kubenswrapper[4984]: I0130 10:27:00.249593 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d" exitCode=0
Jan 30 10:27:00 crc kubenswrapper[4984]: I0130 10:27:00.251128 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"}
Jan 30 10:27:01 crc kubenswrapper[4984]: I0130 10:27:01.259630 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerStarted","Data":"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"}
Jan 30 10:27:02 crc kubenswrapper[4984]: I0130 10:27:02.271641 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371" exitCode=0
Jan 30 10:27:02 crc kubenswrapper[4984]: I0130 10:27:02.271723 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"}
Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.000475 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.000877 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.280032 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerStarted","Data":"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"}
Jan 30 10:27:03 crc kubenswrapper[4984]: I0130 10:27:03.308926 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7g2fz" podStartSLOduration=2.835687994 podStartE2EDuration="5.30891374s" podCreationTimestamp="2026-01-30 10:26:58 +0000 UTC" firstStartedPulling="2026-01-30 10:27:00.253698344 +0000 UTC m=+924.820002168" lastFinishedPulling="2026-01-30 10:27:02.72692409 +0000 UTC m=+927.293227914" observedRunningTime="2026-01-30 10:27:03.307306838 +0000 UTC m=+927.873610662" watchObservedRunningTime="2026-01-30 10:27:03.30891374 +0000 UTC m=+927.875217564"
Jan 30 10:27:08 crc kubenswrapper[4984]: I0130 10:27:08.696341 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:27:08 crc kubenswrapper[4984]: I0130 10:27:08.696894 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:27:08 crc kubenswrapper[4984]: I0130 10:27:08.737503 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:27:09 crc kubenswrapper[4984]: I0130 10:27:09.366307 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:27:09 crc kubenswrapper[4984]: I0130 10:27:09.410336 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.334854 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7g2fz" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server" containerID="cri-o://20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" gracePeriod=2
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.743651 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.796651 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") pod \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") "
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.796971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") pod \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") "
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.797096 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") pod \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\" (UID: \"3a5c8b58-3853-49c9-8d03-c6dd4528b75c\") "
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.797701 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities" (OuterVolumeSpecName: "utilities") pod "3a5c8b58-3853-49c9-8d03-c6dd4528b75c" (UID: "3a5c8b58-3853-49c9-8d03-c6dd4528b75c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.805628 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b" (OuterVolumeSpecName: "kube-api-access-szv5b") pod "3a5c8b58-3853-49c9-8d03-c6dd4528b75c" (UID: "3a5c8b58-3853-49c9-8d03-c6dd4528b75c"). InnerVolumeSpecName "kube-api-access-szv5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.898561 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:27:11 crc kubenswrapper[4984]: I0130 10:27:11.898592 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szv5b\" (UniqueName: \"kubernetes.io/projected/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-kube-api-access-szv5b\") on node \"crc\" DevicePath \"\""
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.162044 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a5c8b58-3853-49c9-8d03-c6dd4528b75c" (UID: "3a5c8b58-3853-49c9-8d03-c6dd4528b75c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.201390 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c8b58-3853-49c9-8d03-c6dd4528b75c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362336 4984 generic.go:334] "Generic (PLEG): container finished" podID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596" exitCode=0
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362394 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"}
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362434 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g2fz" event={"ID":"3a5c8b58-3853-49c9-8d03-c6dd4528b75c","Type":"ContainerDied","Data":"563a7e524d30c4803c51ba214f5d762c3f2083539dc1784f4a771db1f2668c0c"}
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362457 4984 scope.go:117] "RemoveContainer" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.362596 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g2fz"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.390161 4984 scope.go:117] "RemoveContainer" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.403321 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.407398 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7g2fz"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.411475 4984 scope.go:117] "RemoveContainer" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.424778 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"]
Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.425042 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-utilities"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425059 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-utilities"
Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.425079 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425087 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server"
Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.425098 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-content"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425104 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="extract-content"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425229 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" containerName="registry-server"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.425832 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.429037 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k72mx"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.437790 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.442771 4984 scope.go:117] "RemoveContainer" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"
Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.444730 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596\": container with ID starting with 20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596 not found: ID does not exist" containerID="20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.444803 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596"} err="failed to get container status \"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596\": rpc error: code = NotFound desc = could not find container \"20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596\": container with ID starting with 20c45cde9eaa3715a6817283d4a0d0a85b142327b27e2833250f8bfb12317596 not found: ID does not exist"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.444846 4984 scope.go:117] "RemoveContainer" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"
Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.445546 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371\": container with ID starting with dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371 not found: ID does not exist" containerID="dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.445582 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371"} err="failed to get container status \"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371\": rpc error: code = NotFound desc = could not find container \"dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371\": container with ID starting with dac0c3ea389bf7f3a0a3d188ba7f4367dc508b3d40b44827db9d0cbace2fa371 not found: ID does not exist"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.445604 4984 scope.go:117] "RemoveContainer" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"
Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.446150 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d\": container with ID starting with b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d not found: ID does not exist" containerID="b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.446206 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d"} err="failed to get container status \"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d\": rpc error: code = NotFound desc = could not find container \"b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d\": container with ID starting with b2839bbf05dc4c9f022ddea893d6b80078a8d1f5962a8587c6cbf114fd724d0d not found: ID does not exist"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.459088 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.462620 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.469307 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nn86j"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.473337 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.474956 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.477788 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g5sl5"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.480767 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.481584 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.485721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lr58g"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.493770 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.500607 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508048 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phl4z\" (UniqueName: \"kubernetes.io/projected/254d2d7e-3636-429d-b043-501d76db73e9-kube-api-access-phl4z\") pod \"glance-operator-controller-manager-8886f4c47-tjfpn\" (UID: \"254d2d7e-3636-429d-b043-501d76db73e9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508102 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pj6f\" (UniqueName: \"kubernetes.io/projected/5d977367-099f-4a10-bf37-9e9cd913932e-kube-api-access-2pj6f\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sxpfj\" (UID: \"5d977367-099f-4a10-bf37-9e9cd913932e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508178 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnxq\" (UniqueName: \"kubernetes.io/projected/8c70fc0b-a348-4dcd-8fc3-9afa1c22318e-kube-api-access-hqnxq\") pod \"designate-operator-controller-manager-6d9697b7f4-b674n\" (UID: \"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.508197 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92k7t\" (UniqueName: \"kubernetes.io/projected/74bafe89-dc08-4029-823c-f0c3579b8d6b-kube-api-access-92k7t\") pod \"cinder-operator-controller-manager-8d874c8fc-cnxbk\" (UID: \"74bafe89-dc08-4029-823c-f0c3579b8d6b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.521780 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.527803 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.528766 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.532017 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-77kll"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.547527 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.551522 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.552374 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.561487 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.562210 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.565632 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-j2m86"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.565843 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.566475 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-thdrj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.570022 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.571051 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.574048 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.580711 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kjwsf"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.584072 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.602535 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617687 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cl5j\" (UniqueName: \"kubernetes.io/projected/5e7c3856-3562-4cb4-b131-48302c43ce25-kube-api-access-5cl5j\") pod \"heat-operator-controller-manager-69d6db494d-zl2fj\" (UID: \"5e7c3856-3562-4cb4-b131-48302c43ce25\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqnxq\" (UniqueName: \"kubernetes.io/projected/8c70fc0b-a348-4dcd-8fc3-9afa1c22318e-kube-api-access-hqnxq\") pod \"designate-operator-controller-manager-6d9697b7f4-b674n\" (UID: \"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617776 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92k7t\" (UniqueName: \"kubernetes.io/projected/74bafe89-dc08-4029-823c-f0c3579b8d6b-kube-api-access-92k7t\") pod \"cinder-operator-controller-manager-8d874c8fc-cnxbk\" (UID: \"74bafe89-dc08-4029-823c-f0c3579b8d6b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617804 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617827 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzfx\" (UniqueName: \"kubernetes.io/projected/3899fe05-64bb-48b9-88dc-2341ad9bc00b-kube-api-access-9xzfx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-8hrrf\" (UID: \"3899fe05-64bb-48b9-88dc-2341ad9bc00b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617852 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phl4z\" (UniqueName: \"kubernetes.io/projected/254d2d7e-3636-429d-b043-501d76db73e9-kube-api-access-phl4z\") pod \"glance-operator-controller-manager-8886f4c47-tjfpn\" (UID: \"254d2d7e-3636-429d-b043-501d76db73e9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617878 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pj6f\" (UniqueName: \"kubernetes.io/projected/5d977367-099f-4a10-bf37-9e9cd913932e-kube-api-access-2pj6f\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sxpfj\" (UID: \"5d977367-099f-4a10-bf37-9e9cd913932e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617908 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rrq\" (UniqueName: \"kubernetes.io/projected/e420c57f-7248-4454-926f-48766e48236c-kube-api-access-b4rrq\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.617956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whhkr\" (UniqueName: \"kubernetes.io/projected/7a6dd1f5-d0b6-49a6-9270-dd98f2147932-kube-api-access-whhkr\") pod \"horizon-operator-controller-manager-5fb775575f-zzd6d\" (UID: \"7a6dd1f5-d0b6-49a6-9270-dd98f2147932\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.664989 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.667146 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.669907 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnxq\" (UniqueName: \"kubernetes.io/projected/8c70fc0b-a348-4dcd-8fc3-9afa1c22318e-kube-api-access-hqnxq\") pod \"designate-operator-controller-manager-6d9697b7f4-b674n\" (UID: \"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.676282 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pj6f\" (UniqueName: \"kubernetes.io/projected/5d977367-099f-4a10-bf37-9e9cd913932e-kube-api-access-2pj6f\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-sxpfj\" (UID: \"5d977367-099f-4a10-bf37-9e9cd913932e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.678437 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rtfnj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.689660 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phl4z\" (UniqueName: \"kubernetes.io/projected/254d2d7e-3636-429d-b043-501d76db73e9-kube-api-access-phl4z\") pod \"glance-operator-controller-manager-8886f4c47-tjfpn\" (UID: \"254d2d7e-3636-429d-b043-501d76db73e9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.711960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92k7t\" (UniqueName: \"kubernetes.io/projected/74bafe89-dc08-4029-823c-f0c3579b8d6b-kube-api-access-92k7t\") pod \"cinder-operator-controller-manager-8d874c8fc-cnxbk\" (UID: \"74bafe89-dc08-4029-823c-f0c3579b8d6b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.715954 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"]
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.716973 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.721044 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mv96k"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whhkr\" (UniqueName: \"kubernetes.io/projected/7a6dd1f5-d0b6-49a6-9270-dd98f2147932-kube-api-access-whhkr\") pod \"horizon-operator-controller-manager-5fb775575f-zzd6d\" (UID: \"7a6dd1f5-d0b6-49a6-9270-dd98f2147932\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725472 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cl5j\" (UniqueName: \"kubernetes.io/projected/5e7c3856-3562-4cb4-b131-48302c43ce25-kube-api-access-5cl5j\") pod \"heat-operator-controller-manager-69d6db494d-zl2fj\" (UID: \"5e7c3856-3562-4cb4-b131-48302c43ce25\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"
Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\")
" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725544 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xzfx\" (UniqueName: \"kubernetes.io/projected/3899fe05-64bb-48b9-88dc-2341ad9bc00b-kube-api-access-9xzfx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-8hrrf\" (UID: \"3899fe05-64bb-48b9-88dc-2341ad9bc00b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725586 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nc6\" (UniqueName: \"kubernetes.io/projected/dd895dbf-b809-498c-95fd-dfd09a9eeb4d-kube-api-access-b9nc6\") pod \"keystone-operator-controller-manager-84f48565d4-zwc2t\" (UID: \"dd895dbf-b809-498c-95fd-dfd09a9eeb4d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.725613 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rrq\" (UniqueName: \"kubernetes.io/projected/e420c57f-7248-4454-926f-48766e48236c-kube-api-access-b4rrq\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.726500 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:12 crc kubenswrapper[4984]: E0130 10:27:12.726559 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. 
No retries permitted until 2026-01-30 10:27:13.226537813 +0000 UTC m=+937.792841637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.732002 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.744772 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.747241 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cl5j\" (UniqueName: \"kubernetes.io/projected/5e7c3856-3562-4cb4-b131-48302c43ce25-kube-api-access-5cl5j\") pod \"heat-operator-controller-manager-69d6db494d-zl2fj\" (UID: \"5e7c3856-3562-4cb4-b131-48302c43ce25\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.748796 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.753489 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whhkr\" (UniqueName: \"kubernetes.io/projected/7a6dd1f5-d0b6-49a6-9270-dd98f2147932-kube-api-access-whhkr\") pod \"horizon-operator-controller-manager-5fb775575f-zzd6d\" (UID: \"7a6dd1f5-d0b6-49a6-9270-dd98f2147932\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.762970 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rrq\" (UniqueName: \"kubernetes.io/projected/e420c57f-7248-4454-926f-48766e48236c-kube-api-access-b4rrq\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.773768 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.774464 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.774809 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.776231 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-x6pjc" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.777852 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-srv7f" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.784222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xzfx\" (UniqueName: \"kubernetes.io/projected/3899fe05-64bb-48b9-88dc-2341ad9bc00b-kube-api-access-9xzfx\") pod \"ironic-operator-controller-manager-5f4b8bd54d-8hrrf\" (UID: \"3899fe05-64bb-48b9-88dc-2341ad9bc00b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.784815 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.813599 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832133 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832906 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfhqb\" (UniqueName: \"kubernetes.io/projected/1d30b9a6-fe73-4e32-9095-65b1950f7afe-kube-api-access-gfhqb\") pod \"neutron-operator-controller-manager-585dbc889-2tbcn\" (UID: \"1d30b9a6-fe73-4e32-9095-65b1950f7afe\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nc6\" (UniqueName: \"kubernetes.io/projected/dd895dbf-b809-498c-95fd-dfd09a9eeb4d-kube-api-access-b9nc6\") pod \"keystone-operator-controller-manager-84f48565d4-zwc2t\" (UID: \"dd895dbf-b809-498c-95fd-dfd09a9eeb4d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.832972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bh5d\" (UniqueName: \"kubernetes.io/projected/67a8ae49-7f19-47bc-8e54-0873c535f6ff-kube-api-access-8bh5d\") pod \"mariadb-operator-controller-manager-67bf948998-t75dn\" (UID: \"67a8ae49-7f19-47bc-8e54-0873c535f6ff\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: 
I0130 10:27:12.832997 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62n78\" (UniqueName: \"kubernetes.io/projected/739ed1d4-c090-4166-9352-d048e0b281d6-kube-api-access-62n78\") pod \"manila-operator-controller-manager-7dd968899f-2wvrh\" (UID: \"739ed1d4-c090-4166-9352-d048e0b281d6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.856933 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.858033 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nc6\" (UniqueName: \"kubernetes.io/projected/dd895dbf-b809-498c-95fd-dfd09a9eeb4d-kube-api-access-b9nc6\") pod \"keystone-operator-controller-manager-84f48565d4-zwc2t\" (UID: \"dd895dbf-b809-498c-95fd-dfd09a9eeb4d\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.849945 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.872390 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.888862 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.909520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.919076 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.919562 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.924179 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.932293 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mzqbr" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.934188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfhqb\" (UniqueName: \"kubernetes.io/projected/1d30b9a6-fe73-4e32-9095-65b1950f7afe-kube-api-access-gfhqb\") pod \"neutron-operator-controller-manager-585dbc889-2tbcn\" (UID: \"1d30b9a6-fe73-4e32-9095-65b1950f7afe\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.934257 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bh5d\" (UniqueName: \"kubernetes.io/projected/67a8ae49-7f19-47bc-8e54-0873c535f6ff-kube-api-access-8bh5d\") pod \"mariadb-operator-controller-manager-67bf948998-t75dn\" (UID: \"67a8ae49-7f19-47bc-8e54-0873c535f6ff\") " 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.934293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62n78\" (UniqueName: \"kubernetes.io/projected/739ed1d4-c090-4166-9352-d048e0b281d6-kube-api-access-62n78\") pod \"manila-operator-controller-manager-7dd968899f-2wvrh\" (UID: \"739ed1d4-c090-4166-9352-d048e0b281d6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.954719 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bh5d\" (UniqueName: \"kubernetes.io/projected/67a8ae49-7f19-47bc-8e54-0873c535f6ff-kube-api-access-8bh5d\") pod \"mariadb-operator-controller-manager-67bf948998-t75dn\" (UID: \"67a8ae49-7f19-47bc-8e54-0873c535f6ff\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.956055 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62n78\" (UniqueName: \"kubernetes.io/projected/739ed1d4-c090-4166-9352-d048e0b281d6-kube-api-access-62n78\") pod \"manila-operator-controller-manager-7dd968899f-2wvrh\" (UID: \"739ed1d4-c090-4166-9352-d048e0b281d6\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.957515 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.960025 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfhqb\" (UniqueName: \"kubernetes.io/projected/1d30b9a6-fe73-4e32-9095-65b1950f7afe-kube-api-access-gfhqb\") pod \"neutron-operator-controller-manager-585dbc889-2tbcn\" (UID: 
\"1d30b9a6-fe73-4e32-9095-65b1950f7afe\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.965280 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp"] Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.966856 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:12 crc kubenswrapper[4984]: I0130 10:27:12.971809 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5b5d7" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.003525 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.016054 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.017807 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.023929 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g5pdg" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.034369 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.034987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmtm\" (UniqueName: \"kubernetes.io/projected/c6ee91ae-9b91-46a7-ad2a-c67133a4f40e-kube-api-access-zpmtm\") pod \"octavia-operator-controller-manager-6687f8d877-sh7cp\" (UID: \"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.035039 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnglj\" (UniqueName: \"kubernetes.io/projected/ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1-kube-api-access-wnglj\") pod \"nova-operator-controller-manager-55bff696bd-gcbx5\" (UID: \"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.035059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsr5s\" (UniqueName: \"kubernetes.io/projected/bb50c219-6036-48d0-8568-0a1601150272-kube-api-access-tsr5s\") pod \"ovn-operator-controller-manager-788c46999f-28kkh\" (UID: \"bb50c219-6036-48d0-8568-0a1601150272\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.075060 4984 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.076358 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.083758 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.084237 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hd2cj" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.084746 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.087742 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.088064 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rq8h4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.089061 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.090310 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.091696 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jrdtd" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.097948 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.116699 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.126668 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139296 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsr5s\" (UniqueName: \"kubernetes.io/projected/bb50c219-6036-48d0-8568-0a1601150272-kube-api-access-tsr5s\") pod \"ovn-operator-controller-manager-788c46999f-28kkh\" (UID: \"bb50c219-6036-48d0-8568-0a1601150272\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139396 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xl2h\" (UniqueName: \"kubernetes.io/projected/8d22f0a7-a541-405b-8146-fb098d02ddcc-kube-api-access-6xl2h\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139429 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjvn\" (UniqueName: \"kubernetes.io/projected/c3eec896-3441-4b0e-a7e5-4bde717dbccd-kube-api-access-7fjvn\") pod \"swift-operator-controller-manager-68fc8c869-jvcvp\" (UID: \"c3eec896-3441-4b0e-a7e5-4bde717dbccd\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139484 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvzn\" (UniqueName: \"kubernetes.io/projected/69e058b7-deda-4eb8-9cac-6bc08032b3bf-kube-api-access-xpvzn\") pod \"placement-operator-controller-manager-5b964cf4cd-fx6t9\" (UID: \"69e058b7-deda-4eb8-9cac-6bc08032b3bf\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139523 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139602 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpmtm\" (UniqueName: \"kubernetes.io/projected/c6ee91ae-9b91-46a7-ad2a-c67133a4f40e-kube-api-access-zpmtm\") pod \"octavia-operator-controller-manager-6687f8d877-sh7cp\" (UID: \"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.139677 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnglj\" (UniqueName: 
\"kubernetes.io/projected/ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1-kube-api-access-wnglj\") pod \"nova-operator-controller-manager-55bff696bd-gcbx5\" (UID: \"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.144197 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.145851 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.171478 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpmtm\" (UniqueName: \"kubernetes.io/projected/c6ee91ae-9b91-46a7-ad2a-c67133a4f40e-kube-api-access-zpmtm\") pod \"octavia-operator-controller-manager-6687f8d877-sh7cp\" (UID: \"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.174866 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnglj\" (UniqueName: \"kubernetes.io/projected/ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1-kube-api-access-wnglj\") pod \"nova-operator-controller-manager-55bff696bd-gcbx5\" (UID: \"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.180844 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsr5s\" (UniqueName: \"kubernetes.io/projected/bb50c219-6036-48d0-8568-0a1601150272-kube-api-access-tsr5s\") pod \"ovn-operator-controller-manager-788c46999f-28kkh\" (UID: \"bb50c219-6036-48d0-8568-0a1601150272\") " 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.188880 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.189347 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.204497 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.205631 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.209117 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jnlqs" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.215030 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.218923 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.227209 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.227922 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.242970 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkmp\" (UniqueName: \"kubernetes.io/projected/df5d4f32-b49b-46ea-8aac-a3b76b2f8f00-kube-api-access-qdkmp\") pod \"telemetry-operator-controller-manager-64b5b76f97-r7hs4\" (UID: \"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.243040 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246316 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xl2h\" (UniqueName: \"kubernetes.io/projected/8d22f0a7-a541-405b-8146-fb098d02ddcc-kube-api-access-6xl2h\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246366 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjvn\" (UniqueName: \"kubernetes.io/projected/c3eec896-3441-4b0e-a7e5-4bde717dbccd-kube-api-access-7fjvn\") pod 
\"swift-operator-controller-manager-68fc8c869-jvcvp\" (UID: \"c3eec896-3441-4b0e-a7e5-4bde717dbccd\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246399 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvzn\" (UniqueName: \"kubernetes.io/projected/69e058b7-deda-4eb8-9cac-6bc08032b3bf-kube-api-access-xpvzn\") pod \"placement-operator-controller-manager-5b964cf4cd-fx6t9\" (UID: \"69e058b7-deda-4eb8-9cac-6bc08032b3bf\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.246462 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.249617 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-h7pcb"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.250553 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.252206 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.252325 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. 
No retries permitted until 2026-01-30 10:27:13.752297714 +0000 UTC m=+938.318601538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.253543 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.253661 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.253724 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.253700311 +0000 UTC m=+938.820004135 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.256731 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-j4mt6" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.262466 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rdflf" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.280633 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-h7pcb"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.291934 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvzn\" (UniqueName: \"kubernetes.io/projected/69e058b7-deda-4eb8-9cac-6bc08032b3bf-kube-api-access-xpvzn\") pod \"placement-operator-controller-manager-5b964cf4cd-fx6t9\" (UID: \"69e058b7-deda-4eb8-9cac-6bc08032b3bf\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.291968 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.322626 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjvn\" (UniqueName: \"kubernetes.io/projected/c3eec896-3441-4b0e-a7e5-4bde717dbccd-kube-api-access-7fjvn\") pod \"swift-operator-controller-manager-68fc8c869-jvcvp\" (UID: \"c3eec896-3441-4b0e-a7e5-4bde717dbccd\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.339371 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xl2h\" (UniqueName: \"kubernetes.io/projected/8d22f0a7-a541-405b-8146-fb098d02ddcc-kube-api-access-6xl2h\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.357270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzn2\" (UniqueName: \"kubernetes.io/projected/9a53674a-07ad-4bfc-80c8-f55bcc286eb0-kube-api-access-klzn2\") pod \"watcher-operator-controller-manager-564965969-h7pcb\" (UID: \"9a53674a-07ad-4bfc-80c8-f55bcc286eb0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.357356 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdkmp\" (UniqueName: \"kubernetes.io/projected/df5d4f32-b49b-46ea-8aac-a3b76b2f8f00-kube-api-access-qdkmp\") pod \"telemetry-operator-controller-manager-64b5b76f97-r7hs4\" (UID: \"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc 
kubenswrapper[4984]: I0130 10:27:13.357490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrb2l\" (UniqueName: \"kubernetes.io/projected/350834d1-9352-4ca5-9c8a-acf60193ebc8-kube-api-access-xrb2l\") pod \"test-operator-controller-manager-56f8bfcd9f-4lz58\" (UID: \"350834d1-9352-4ca5-9c8a-acf60193ebc8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.357657 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.390431 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.391123 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdkmp\" (UniqueName: \"kubernetes.io/projected/df5d4f32-b49b-46ea-8aac-a3b76b2f8f00-kube-api-access-qdkmp\") pod \"telemetry-operator-controller-manager-64b5b76f97-r7hs4\" (UID: \"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.391652 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.394196 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.398992 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.401001 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mg6lk" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.410355 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.421534 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.448347 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.459961 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrb2l\" (UniqueName: \"kubernetes.io/projected/350834d1-9352-4ca5-9c8a-acf60193ebc8-kube-api-access-xrb2l\") pod \"test-operator-controller-manager-56f8bfcd9f-4lz58\" (UID: \"350834d1-9352-4ca5-9c8a-acf60193ebc8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.460040 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9849\" (UniqueName: \"kubernetes.io/projected/87613c07-d864-4440-b31c-03c4bb3f8ce0-kube-api-access-v9849\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.460103 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.460169 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klzn2\" (UniqueName: \"kubernetes.io/projected/9a53674a-07ad-4bfc-80c8-f55bcc286eb0-kube-api-access-klzn2\") pod \"watcher-operator-controller-manager-564965969-h7pcb\" (UID: \"9a53674a-07ad-4bfc-80c8-f55bcc286eb0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc 
kubenswrapper[4984]: I0130 10:27:13.460205 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.465128 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.466211 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.469467 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mll9d" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.471517 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.499501 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.500603 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrb2l\" (UniqueName: \"kubernetes.io/projected/350834d1-9352-4ca5-9c8a-acf60193ebc8-kube-api-access-xrb2l\") pod \"test-operator-controller-manager-56f8bfcd9f-4lz58\" (UID: \"350834d1-9352-4ca5-9c8a-acf60193ebc8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.501182 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klzn2\" (UniqueName: \"kubernetes.io/projected/9a53674a-07ad-4bfc-80c8-f55bcc286eb0-kube-api-access-klzn2\") pod \"watcher-operator-controller-manager-564965969-h7pcb\" (UID: \"9a53674a-07ad-4bfc-80c8-f55bcc286eb0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.509138 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.527998 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9849\" (UniqueName: \"kubernetes.io/projected/87613c07-d864-4440-b31c-03c4bb3f8ce0-kube-api-access-v9849\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581723 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581781 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod 
\"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.581807 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbfm\" (UniqueName: \"kubernetes.io/projected/e8bf6651-ff58-478c-be28-39732dac675b-kube-api-access-hdbfm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vpt86\" (UID: \"e8bf6651-ff58-478c-be28-39732dac675b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.582177 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.582700 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.582776 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.082756753 +0000 UTC m=+938.649060567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.584932 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.584974 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.084965061 +0000 UTC m=+938.651268885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.606508 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d"] Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.612694 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.619405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9849\" (UniqueName: \"kubernetes.io/projected/87613c07-d864-4440-b31c-03c4bb3f8ce0-kube-api-access-v9849\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.626073 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.654509 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6dd1f5_d0b6_49a6_9270_dd98f2147932.slice/crio-afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd WatchSource:0}: Error finding container afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd: Status 404 returned error can't find the container with id afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.654803 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.672491 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod254d2d7e_3636_429d_b043_501d76db73e9.slice/crio-4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285 WatchSource:0}: Error finding container 4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285: Status 404 returned error can't find the container with id 
4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.693419 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdbfm\" (UniqueName: \"kubernetes.io/projected/e8bf6651-ff58-478c-be28-39732dac675b-kube-api-access-hdbfm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vpt86\" (UID: \"e8bf6651-ff58-478c-be28-39732dac675b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.716977 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdbfm\" (UniqueName: \"kubernetes.io/projected/e8bf6651-ff58-478c-be28-39732dac675b-kube-api-access-hdbfm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vpt86\" (UID: \"e8bf6651-ff58-478c-be28-39732dac675b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.774784 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.792612 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7c3856_3562_4cb4_b131_48302c43ce25.slice/crio-de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54 WatchSource:0}: Error finding container de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54: Status 404 returned error can't find the container with id de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.794743 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.795822 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: E0130 10:27:13.795875 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:14.795860633 +0000 UTC m=+939.362164457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.926574 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.927336 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3899fe05_64bb_48b9_88dc_2341ad9bc00b.slice/crio-5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268 WatchSource:0}: Error finding container 5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268: Status 404 returned error can't find the container with id 5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.932154 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.941182 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd895dbf_b809_498c_95fd_dfd09a9eeb4d.slice/crio-0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e WatchSource:0}: Error finding container 0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e: Status 404 returned error can't find the container with id 0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.942595 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh"] Jan 30 10:27:13 crc kubenswrapper[4984]: W0130 10:27:13.945731 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739ed1d4_c090_4166_9352_d048e0b281d6.slice/crio-6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5 WatchSource:0}: Error finding container 6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5: Status 404 returned error can't find the container with id 6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5 Jan 30 10:27:13 crc kubenswrapper[4984]: I0130 10:27:13.980382 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.067539 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.079296 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.100367 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.100481 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100640 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100642 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100705 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 
nodeName:}" failed. No retries permitted until 2026-01-30 10:27:15.100685757 +0000 UTC m=+939.666989571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.100725 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:15.100716738 +0000 UTC m=+939.667020562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.103712 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5c8b58-3853-49c9-8d03-c6dd4528b75c" path="/var/lib/kubelet/pods/3a5c8b58-3853-49c9-8d03-c6dd4528b75c/volumes" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.154335 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.162841 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e058b7_deda_4eb8_9cac_6bc08032b3bf.slice/crio-1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431 WatchSource:0}: Error finding container 1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431: Status 404 returned error can't find the 
container with id 1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431 Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.170507 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab1e50a1_4d8f_45f4_8fa0_fd4732dce6f1.slice/crio-1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269 WatchSource:0}: Error finding container 1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269: Status 404 returned error can't find the container with id 1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269 Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.171808 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"] Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.172064 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-r7hs4_openstack-operators(df5d4f32-b49b-46ea-8aac-a3b76b2f8f00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.173462 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:14 crc 
kubenswrapper[4984]: E0130 10:27:14.178293 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-gcbx5_openstack-operators(ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.179591 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podUID="ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.194536 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.207740 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.259485 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.269734 4984 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb50c219_6036_48d0_8568_0a1601150272.slice/crio-7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26 WatchSource:0}: Error finding container 7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26: Status 404 returned error can't find the container with id 7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26 Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.274915 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-28kkh_openstack-operators(bb50c219-6036-48d0-8568-0a1601150272): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.276383 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.276471 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"] Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.287144 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fjvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-jvcvp_openstack-operators(c3eec896-3441-4b0e-a7e5-4bde717dbccd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.288384 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.303645 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.303943 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 
10:27:14.304076 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. No retries permitted until 2026-01-30 10:27:16.30404127 +0000 UTC m=+940.870345104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.332223 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-h7pcb"] Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.349228 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.355153 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod350834d1_9352_4ca5_9c8a_acf60193ebc8.slice/crio-f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46 WatchSource:0}: Error finding container f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46: Status 404 returned error can't find the container with id f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46 Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.358061 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrb2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-4lz58_openstack-operators(350834d1-9352-4ca5-9c8a-acf60193ebc8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.359608 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.425215 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" event={"ID":"3899fe05-64bb-48b9-88dc-2341ad9bc00b","Type":"ContainerStarted","Data":"5b7b149a805a7b765650e9a705bbd0681806f910051df6e283acfbb0fc0f2268"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.427952 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" 
event={"ID":"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e","Type":"ContainerStarted","Data":"7423e3dcebb790545f0305b3f9fce0f7d23e4956c37b8826269846751a8d0f34"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.429605 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" event={"ID":"69e058b7-deda-4eb8-9cac-6bc08032b3bf","Type":"ContainerStarted","Data":"1ae82204a0c2274c0659af2e5428267d371f1879458bba6a0edf0f25a00af431"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.432116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" event={"ID":"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00","Type":"ContainerStarted","Data":"0328eaa0d4243a3502dd840b6a1d42887c4794480293571aa7c7779794e3d6aa"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.433811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" event={"ID":"bb50c219-6036-48d0-8568-0a1601150272","Type":"ContainerStarted","Data":"7da0bb9eeb3d855bd92746b9539116f8779ecdba706b903a2676618cf4c68f26"} Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.434093 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.435087 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.436973 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" event={"ID":"dd895dbf-b809-498c-95fd-dfd09a9eeb4d","Type":"ContainerStarted","Data":"0134ef08ebdc6ac48a0422f44b54e090e8f450fd82a29f23e9d1a2f6dcaca37e"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.438746 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" event={"ID":"1d30b9a6-fe73-4e32-9095-65b1950f7afe","Type":"ContainerStarted","Data":"3a270c1d93a85e5ffa09b35390f9099aba6ad921145527be4b582cc162e90c24"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.440750 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" event={"ID":"c3eec896-3441-4b0e-a7e5-4bde717dbccd","Type":"ContainerStarted","Data":"aac7f455d92c29349c242a510c3b523c81a6ab68e0f28304c608cdadfe1dd218"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.443009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" event={"ID":"350834d1-9352-4ca5-9c8a-acf60193ebc8","Type":"ContainerStarted","Data":"f18d7fdd58418ea7305af9f739af224561ccf2bd09247130a57c6c12ae209e46"} Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.476346 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.476465 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.485740 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" event={"ID":"67a8ae49-7f19-47bc-8e54-0873c535f6ff","Type":"ContainerStarted","Data":"dab369cb1a7120cc3279cb7b90b356e47e16203fe21c24eb8b79a975ce625f60"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.498356 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" event={"ID":"5e7c3856-3562-4cb4-b131-48302c43ce25","Type":"ContainerStarted","Data":"de7187d29402eb745534cad84336c5ef26b946b1ab9a9d5689042dc79fd8fe54"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.501130 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" event={"ID":"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1","Type":"ContainerStarted","Data":"1bf51e6ce804078573b612626123bf81dbd82c08fac6d6794dd04ba2424c2269"} Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.503450 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podUID="ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1" Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.504680 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" event={"ID":"9a53674a-07ad-4bfc-80c8-f55bcc286eb0","Type":"ContainerStarted","Data":"a1c399dc6bbc4631454bc3a44e5698e330ef80e4f6107241ed8b9d628a35188e"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.511360 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" event={"ID":"5d977367-099f-4a10-bf37-9e9cd913932e","Type":"ContainerStarted","Data":"ed1ca11823afdb5f0a64cad0469be57d3366ca9a39346199f448b10aaf6e80d0"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.538380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" event={"ID":"7a6dd1f5-d0b6-49a6-9270-dd98f2147932","Type":"ContainerStarted","Data":"afb6615c30e4a88eb2fe81d75326a416d399ffddf8946b4f560a511336817ccd"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.542787 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" event={"ID":"254d2d7e-3636-429d-b043-501d76db73e9","Type":"ContainerStarted","Data":"4ea354640a1254f7c8966a71053d42d6e7a9a595a2499d4fdea9100d0c8ab285"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.547671 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86"] Jan 30 10:27:14 crc kubenswrapper[4984]: W0130 10:27:14.557385 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8bf6651_ff58_478c_be28_39732dac675b.slice/crio-5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581 WatchSource:0}: Error finding container 5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581: Status 404 returned error can't find the container with id 5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581 Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.557513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" event={"ID":"739ed1d4-c090-4166-9352-d048e0b281d6","Type":"ContainerStarted","Data":"6f4ec9e30594b28e56a43fdf97b04497856213fa8e268d40161d49bd4efc84c5"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.562380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" event={"ID":"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e","Type":"ContainerStarted","Data":"74523d9f27d8971efd2b6c156ee39a713cba2c5b665f111da13bc21f1a9ba5c4"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.567853 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" event={"ID":"74bafe89-dc08-4029-823c-f0c3579b8d6b","Type":"ContainerStarted","Data":"90b6e72ae1c9b65557d4a0f35e9dd37eb39fad85709630ccccb606d8afd43cb0"} Jan 30 10:27:14 crc kubenswrapper[4984]: I0130 10:27:14.817758 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.818009 4984 secret.go:188] Couldn't get 
secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:14 crc kubenswrapper[4984]: E0130 10:27:14.818135 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:16.818106792 +0000 UTC m=+941.384410616 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: I0130 10:27:15.124299 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:15 crc kubenswrapper[4984]: I0130 10:27:15.124493 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.124810 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.124906 4984 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:17.124871717 +0000 UTC m=+941.691175541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.125758 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.125811 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:17.125799462 +0000 UTC m=+941.692103286 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:15 crc kubenswrapper[4984]: I0130 10:27:15.582974 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" event={"ID":"e8bf6651-ff58-478c-be28-39732dac675b","Type":"ContainerStarted","Data":"5fc25a276ffe874b3674e2a9f29ca563e56bf0e282ac65486355543e07033581"} Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.585677 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.585996 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.586116 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podUID="ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.586152 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:15 crc kubenswrapper[4984]: E0130 10:27:15.594714 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:16 crc kubenswrapper[4984]: I0130 10:27:16.353565 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.353999 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.354263 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. 
No retries permitted until 2026-01-30 10:27:20.35423403 +0000 UTC m=+944.920537854 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:16 crc kubenswrapper[4984]: I0130 10:27:16.861796 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.862019 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:16 crc kubenswrapper[4984]: E0130 10:27:16.862066 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:20.862050368 +0000 UTC m=+945.428354192 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: I0130 10:27:17.165950 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:17 crc kubenswrapper[4984]: I0130 10:27:17.166086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.166990 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.167078 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:21.167037937 +0000 UTC m=+945.733341851 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.167418 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:17 crc kubenswrapper[4984]: E0130 10:27:17.167482 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:21.167471598 +0000 UTC m=+945.733775422 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: I0130 10:27:20.417528 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.417711 4984 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.417977 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert 
podName:e420c57f-7248-4454-926f-48766e48236c nodeName:}" failed. No retries permitted until 2026-01-30 10:27:28.417953665 +0000 UTC m=+952.984257489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert") pod "infra-operator-controller-manager-79955696d6-t5j55" (UID: "e420c57f-7248-4454-926f-48766e48236c") : secret "infra-operator-webhook-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: I0130 10:27:20.924668 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.925014 4984 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:20 crc kubenswrapper[4984]: E0130 10:27:20.925099 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert podName:8d22f0a7-a541-405b-8146-fb098d02ddcc nodeName:}" failed. No retries permitted until 2026-01-30 10:27:28.925079555 +0000 UTC m=+953.491383379 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" (UID: "8d22f0a7-a541-405b-8146-fb098d02ddcc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: I0130 10:27:21.227511 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:21 crc kubenswrapper[4984]: I0130 10:27:21.227613 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227727 4984 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227836 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227858 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:29.227813834 +0000 UTC m=+953.794117658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "metrics-server-cert" not found Jan 30 10:27:21 crc kubenswrapper[4984]: E0130 10:27:21.227931 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:29.227911667 +0000 UTC m=+953.794215491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.430782 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.446171 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e420c57f-7248-4454-926f-48766e48236c-cert\") pod \"infra-operator-controller-manager-79955696d6-t5j55\" (UID: \"e420c57f-7248-4454-926f-48766e48236c\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.499379 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:28 crc kubenswrapper[4984]: E0130 10:27:28.677882 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 10:27:28 crc kubenswrapper[4984]: E0130 10:27:28.678050 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9nc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-zwc2t_openstack-operators(dd895dbf-b809-498c-95fd-dfd09a9eeb4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:28 crc kubenswrapper[4984]: E0130 10:27:28.679373 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" podUID="dd895dbf-b809-498c-95fd-dfd09a9eeb4d" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.941695 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:28 crc kubenswrapper[4984]: I0130 10:27:28.944673 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/8d22f0a7-a541-405b-8146-fb098d02ddcc-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4df2g45\" (UID: \"8d22f0a7-a541-405b-8146-fb098d02ddcc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.023331 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.254512 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.254649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.254722 4984 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.254811 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs podName:87613c07-d864-4440-b31c-03c4bb3f8ce0 nodeName:}" failed. No retries permitted until 2026-01-30 10:27:45.254789738 +0000 UTC m=+969.821093632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs") pod "openstack-operator-controller-manager-8d786f48c-jtznv" (UID: "87613c07-d864-4440-b31c-03c4bb3f8ce0") : secret "webhook-server-cert" not found Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.258628 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-metrics-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.416590 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.417148 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdbfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vpt86_openstack-operators(e8bf6651-ff58-478c-be28-39732dac675b): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.419275 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" podUID="e8bf6651-ff58-478c-be28-39732dac675b" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.714572 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" podUID="dd895dbf-b809-498c-95fd-dfd09a9eeb4d" Jan 30 10:27:29 crc kubenswrapper[4984]: E0130 10:27:29.714589 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" podUID="e8bf6651-ff58-478c-be28-39732dac675b" Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.778374 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"] Jan 30 10:27:29 crc kubenswrapper[4984]: I0130 10:27:29.892123 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"] Jan 30 10:27:29 crc kubenswrapper[4984]: W0130 10:27:29.936525 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode420c57f_7248_4454_926f_48766e48236c.slice/crio-bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8 WatchSource:0}: Error finding container bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8: Status 404 returned error can't find the container with id bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8 Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.710880 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" event={"ID":"c6ee91ae-9b91-46a7-ad2a-c67133a4f40e","Type":"ContainerStarted","Data":"c810d9f7c7b37754bfd9d25467e388b6d651887fb467b51ebc2dbc2c5ad76d71"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.711192 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.718658 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" event={"ID":"1d30b9a6-fe73-4e32-9095-65b1950f7afe","Type":"ContainerStarted","Data":"057923cc00385f2e91b782f2b3ce6480838a580079af69fec71237a6dc419f82"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.718743 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.721021 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" event={"ID":"69e058b7-deda-4eb8-9cac-6bc08032b3bf","Type":"ContainerStarted","Data":"9919b8529b7b94f0d2cd488b22d849bf8e80dab6bfb38df53250f5414de3876f"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.721139 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.727071 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" event={"ID":"739ed1d4-c090-4166-9352-d048e0b281d6","Type":"ContainerStarted","Data":"72795c3611663a828744b98b297983704d6879e7d3c83583be4f0535d10fabea"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.727210 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.731211 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" event={"ID":"8c70fc0b-a348-4dcd-8fc3-9afa1c22318e","Type":"ContainerStarted","Data":"b18d5a1e60121e2a1f8080ee04c697c414af14214cd834e17fb5d8559554d280"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.731267 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.734242 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" event={"ID":"3899fe05-64bb-48b9-88dc-2341ad9bc00b","Type":"ContainerStarted","Data":"ae3da3c684895e06c54e2cad65bb8a4cd315abd38d39d1b7f4eee622f0a79715"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.734374 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.735271 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" podStartSLOduration=4.246318389 podStartE2EDuration="18.73523572s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.169363185 +0000 UTC m=+938.735667009" lastFinishedPulling="2026-01-30 10:27:28.658280516 +0000 UTC m=+953.224584340" observedRunningTime="2026-01-30 10:27:30.731631095 +0000 UTC m=+955.297934919" watchObservedRunningTime="2026-01-30 10:27:30.73523572 +0000 UTC m=+955.301539544" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.742466 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" event={"ID":"254d2d7e-3636-429d-b043-501d76db73e9","Type":"ContainerStarted","Data":"55ea3a2fd2999264ef3e83df56f31ba705b340a7780375b4b7ec5e465ddf58d6"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.742607 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.747640 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" podStartSLOduration=4.03516058 podStartE2EDuration="18.747623516s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.947745131 +0000 UTC m=+938.514048955" lastFinishedPulling="2026-01-30 10:27:28.660208067 +0000 UTC m=+953.226511891" observedRunningTime="2026-01-30 10:27:30.747430531 +0000 UTC m=+955.313734365" watchObservedRunningTime="2026-01-30 10:27:30.747623516 +0000 UTC m=+955.313927340" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.755630 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" 
event={"ID":"5d977367-099f-4a10-bf37-9e9cd913932e","Type":"ContainerStarted","Data":"34c327ae5e06300019eb8678bd2841914b3258d52de175df2be4021f271d1283"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.755750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.758565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" event={"ID":"9a53674a-07ad-4bfc-80c8-f55bcc286eb0","Type":"ContainerStarted","Data":"251e4a76d7e4cac0b436f06c235497525fdc90d721bc7932a4578af4410332c0"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.758997 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.771829 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" podStartSLOduration=3.088797299 podStartE2EDuration="18.771812183s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.698568472 +0000 UTC m=+938.264872286" lastFinishedPulling="2026-01-30 10:27:29.381583306 +0000 UTC m=+953.947887170" observedRunningTime="2026-01-30 10:27:30.768598809 +0000 UTC m=+955.334902633" watchObservedRunningTime="2026-01-30 10:27:30.771812183 +0000 UTC m=+955.338116007" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.801938 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" podStartSLOduration=3.519631501 podStartE2EDuration="18.801920526s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.099232029 +0000 UTC m=+938.665535853" 
lastFinishedPulling="2026-01-30 10:27:29.381521054 +0000 UTC m=+953.947824878" observedRunningTime="2026-01-30 10:27:30.80092748 +0000 UTC m=+955.367231304" watchObservedRunningTime="2026-01-30 10:27:30.801920526 +0000 UTC m=+955.368224360" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.819349 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" event={"ID":"7a6dd1f5-d0b6-49a6-9270-dd98f2147932","Type":"ContainerStarted","Data":"7a144195e705916159414100ed81046290642b8fb85fa4bb467ba9f97c474f09"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.820385 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.849835 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" event={"ID":"8d22f0a7-a541-405b-8146-fb098d02ddcc","Type":"ContainerStarted","Data":"013825f773fbc5742f79d7470bdf6d6f2fd898b0335d04e5b8e0d45b25964af2"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.851168 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" event={"ID":"5e7c3856-3562-4cb4-b131-48302c43ce25","Type":"ContainerStarted","Data":"95e8b6b081f0ca05e5be08fc430949478be2e4c08f82c2f80cf963627e1476ab"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.852017 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.857957 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" 
event={"ID":"e420c57f-7248-4454-926f-48766e48236c","Type":"ContainerStarted","Data":"bb4f85c3ae053c67424955eb38536166b277888ef791be18b4e170abf45b76c8"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.859020 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" event={"ID":"67a8ae49-7f19-47bc-8e54-0873c535f6ff","Type":"ContainerStarted","Data":"6bf15dac9dacb4c9039d9909d4d4395ac1b4ae61587e0c6c716280d6534f0869"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.859447 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.875639 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" podStartSLOduration=3.606670962 podStartE2EDuration="18.875617836s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.166387217 +0000 UTC m=+938.732691041" lastFinishedPulling="2026-01-30 10:27:29.435334091 +0000 UTC m=+954.001637915" observedRunningTime="2026-01-30 10:27:30.848389719 +0000 UTC m=+955.414693553" watchObservedRunningTime="2026-01-30 10:27:30.875617836 +0000 UTC m=+955.441921660" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.898576 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" event={"ID":"74bafe89-dc08-4029-823c-f0c3579b8d6b","Type":"ContainerStarted","Data":"465b51a4342cb655be167ec4d161daf299fffd38bcce758b4e92cade0e91eacf"} Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.899209 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.903222 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" podStartSLOduration=3.052173055 podStartE2EDuration="18.903208582s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.530430246 +0000 UTC m=+938.096734070" lastFinishedPulling="2026-01-30 10:27:29.381465773 +0000 UTC m=+953.947769597" observedRunningTime="2026-01-30 10:27:30.90047451 +0000 UTC m=+955.466778334" watchObservedRunningTime="2026-01-30 10:27:30.903208582 +0000 UTC m=+955.469512406" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.955700 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" podStartSLOduration=3.2467186359999998 podStartE2EDuration="18.955683043s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.686756501 +0000 UTC m=+938.253060315" lastFinishedPulling="2026-01-30 10:27:29.395720858 +0000 UTC m=+953.962024722" observedRunningTime="2026-01-30 10:27:30.931453906 +0000 UTC m=+955.497757730" watchObservedRunningTime="2026-01-30 10:27:30.955683043 +0000 UTC m=+955.521986887" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.956820 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" podStartSLOduration=3.226950135 podStartE2EDuration="18.956815003s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.664405162 +0000 UTC m=+938.230708986" lastFinishedPulling="2026-01-30 10:27:29.39426999 +0000 UTC m=+953.960573854" observedRunningTime="2026-01-30 10:27:30.953631399 +0000 UTC m=+955.519935223" watchObservedRunningTime="2026-01-30 10:27:30.956815003 +0000 UTC m=+955.523118827" Jan 30 10:27:30 crc kubenswrapper[4984]: I0130 10:27:30.994803 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" podStartSLOduration=3.916940309 podStartE2EDuration="18.994789913s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.339552975 +0000 UTC m=+938.905856799" lastFinishedPulling="2026-01-30 10:27:29.417402579 +0000 UTC m=+953.983706403" observedRunningTime="2026-01-30 10:27:30.992039621 +0000 UTC m=+955.558343445" watchObservedRunningTime="2026-01-30 10:27:30.994789913 +0000 UTC m=+955.561093737" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.031266 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" podStartSLOduration=3.565738134 podStartE2EDuration="19.031235992s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.928823173 +0000 UTC m=+938.495126987" lastFinishedPulling="2026-01-30 10:27:29.394321021 +0000 UTC m=+953.960624845" observedRunningTime="2026-01-30 10:27:31.027181656 +0000 UTC m=+955.593485480" watchObservedRunningTime="2026-01-30 10:27:31.031235992 +0000 UTC m=+955.597539816" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.061911 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" podStartSLOduration=3.940045598 podStartE2EDuration="19.06189645s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.536359822 +0000 UTC m=+938.102663656" lastFinishedPulling="2026-01-30 10:27:28.658210684 +0000 UTC m=+953.224514508" observedRunningTime="2026-01-30 10:27:31.060811161 +0000 UTC m=+955.627114985" watchObservedRunningTime="2026-01-30 10:27:31.06189645 +0000 UTC m=+955.628200274" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.095347 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" podStartSLOduration=3.8304897540000002 podStartE2EDuration="19.09533117s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.116656788 +0000 UTC m=+938.682960612" lastFinishedPulling="2026-01-30 10:27:29.381498204 +0000 UTC m=+953.947802028" observedRunningTime="2026-01-30 10:27:31.09193364 +0000 UTC m=+955.658237464" watchObservedRunningTime="2026-01-30 10:27:31.09533117 +0000 UTC m=+955.661634994" Jan 30 10:27:31 crc kubenswrapper[4984]: I0130 10:27:31.141220 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" podStartSLOduration=4.286296101 podStartE2EDuration="19.141201477s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.803296418 +0000 UTC m=+938.369600242" lastFinishedPulling="2026-01-30 10:27:28.658201794 +0000 UTC m=+953.224505618" observedRunningTime="2026-01-30 10:27:31.137418338 +0000 UTC m=+955.703722152" watchObservedRunningTime="2026-01-30 10:27:31.141201477 +0000 UTC m=+955.707505301" Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.000756 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001128 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001178 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001894 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:27:33 crc kubenswrapper[4984]: I0130 10:27:33.001960 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e" gracePeriod=600 Jan 30 10:27:35 crc kubenswrapper[4984]: I0130 10:27:35.937294 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e" exitCode=0 Jan 30 10:27:35 crc kubenswrapper[4984]: I0130 10:27:35.937607 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e"} Jan 30 10:27:35 crc kubenswrapper[4984]: I0130 10:27:35.937683 4984 scope.go:117] "RemoveContainer" containerID="fe54118d6b2dc91521b65835c2eeaaa1795ea49993d1e6422219064328999f71" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.738634 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.739564 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-r7hs4_openstack-operators(df5d4f32-b49b-46ea-8aac-a3b76b2f8f00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.740876 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.746975 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-sxpfj" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.791336 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-cnxbk" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.825090 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-b674n" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.838379 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-tjfpn" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.860546 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-zl2fj" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.896331 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-zzd6d" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.916519 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.916719 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fjvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-jvcvp_openstack-operators(c3eec896-3441-4b0e-a7e5-4bde717dbccd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:42 crc kubenswrapper[4984]: E0130 10:27:42.918092 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd" Jan 30 10:27:42 crc kubenswrapper[4984]: I0130 10:27:42.928404 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-8hrrf" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.129214 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2wvrh" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.148710 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-t75dn" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.192269 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-2tbcn" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.294715 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-sh7cp" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.413899 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-fx6t9" Jan 30 10:27:43 crc kubenswrapper[4984]: I0130 10:27:43.615846 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-h7pcb" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.023095 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 
10:27:44.023639 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tsr5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-28kkh_openstack-operators(bb50c219-6036-48d0-8568-0a1601150272): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.025194 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272" Jan 30 10:27:44 crc kubenswrapper[4984]: I0130 10:27:44.107983 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.585750 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.585967 4984 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xrb2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-4lz58_openstack-operators(350834d1-9352-4ca5-9c8a-acf60193ebc8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:27:44 crc kubenswrapper[4984]: E0130 10:27:44.587308 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.329870 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.340645 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87613c07-d864-4440-b31c-03c4bb3f8ce0-webhook-certs\") pod \"openstack-operator-controller-manager-8d786f48c-jtznv\" (UID: \"87613c07-d864-4440-b31c-03c4bb3f8ce0\") " pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.459428 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mg6lk" Jan 30 10:27:45 crc kubenswrapper[4984]: I0130 10:27:45.465076 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:47 crc kubenswrapper[4984]: I0130 10:27:47.349275 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"] Jan 30 10:27:47 crc kubenswrapper[4984]: W0130 10:27:47.353084 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87613c07_d864_4440_b31c_03c4bb3f8ce0.slice/crio-037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59 WatchSource:0}: Error finding container 037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59: Status 404 returned error can't find the container with id 037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59 Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.046927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" event={"ID":"ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1","Type":"ContainerStarted","Data":"0cebf764529f6291567b24ce6a6df83a4714b25e31eec21fb7fcbbe5a2a61b17"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.047377 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.048593 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" event={"ID":"87613c07-d864-4440-b31c-03c4bb3f8ce0","Type":"ContainerStarted","Data":"4c7925caf0b0545a2a7a28fbcb4b948370484815a76708bbf3d088b5e0d31c05"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.048648 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" event={"ID":"87613c07-d864-4440-b31c-03c4bb3f8ce0","Type":"ContainerStarted","Data":"037d6a52123c3a71f3a39e730bfef2b5d1ee5f5085bb5723f78e1ac12848ab59"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.048736 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.050902 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" event={"ID":"dd895dbf-b809-498c-95fd-dfd09a9eeb4d","Type":"ContainerStarted","Data":"91134e293eaeb33fcfd4b39ce2a56035a34b72c90acdf0b7fdfa6d5176510ab7"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.051122 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.052344 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" event={"ID":"e8bf6651-ff58-478c-be28-39732dac675b","Type":"ContainerStarted","Data":"1af3c14bbcc5dd5b6f35eabaa5ebd5e6526d8d84671fe55401880aed7fd18d06"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.054774 4984 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.056354 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" event={"ID":"8d22f0a7-a541-405b-8146-fb098d02ddcc","Type":"ContainerStarted","Data":"27408c10e7f614f45e5091a02e8d6153942c14ebf77a56107d2d0c7a118e9cd4"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.057259 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.058687 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" event={"ID":"e420c57f-7248-4454-926f-48766e48236c","Type":"ContainerStarted","Data":"08c94bcdb7901a89f9ae6dc972ad76bdad1f8d0cb624d0a677f72f9f3b6bd6a6"} Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.058841 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.064490 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5" podStartSLOduration=3.321453522 podStartE2EDuration="36.064470861s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.178071584 +0000 UTC m=+938.744375408" lastFinishedPulling="2026-01-30 10:27:46.921088923 +0000 UTC m=+971.487392747" observedRunningTime="2026-01-30 10:27:48.061199955 +0000 UTC m=+972.627503809" watchObservedRunningTime="2026-01-30 10:27:48.064470861 
+0000 UTC m=+972.630774675" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.093765 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv" podStartSLOduration=35.093742352 podStartE2EDuration="35.093742352s" podCreationTimestamp="2026-01-30 10:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:27:48.087860647 +0000 UTC m=+972.654164481" watchObservedRunningTime="2026-01-30 10:27:48.093742352 +0000 UTC m=+972.660046186" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.139444 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45" podStartSLOduration=19.013649058 podStartE2EDuration="36.139426834s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:29.795357458 +0000 UTC m=+954.361661282" lastFinishedPulling="2026-01-30 10:27:46.921135234 +0000 UTC m=+971.487439058" observedRunningTime="2026-01-30 10:27:48.13279968 +0000 UTC m=+972.699103514" watchObservedRunningTime="2026-01-30 10:27:48.139426834 +0000 UTC m=+972.705730658" Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.151883 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t" podStartSLOduration=3.214735243 podStartE2EDuration="36.151864622s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:13.943353365 +0000 UTC m=+938.509657189" lastFinishedPulling="2026-01-30 10:27:46.880482734 +0000 UTC m=+971.446786568" observedRunningTime="2026-01-30 10:27:48.15103746 +0000 UTC m=+972.717341284" watchObservedRunningTime="2026-01-30 10:27:48.151864622 +0000 UTC m=+972.718168446" Jan 30 10:27:48 crc kubenswrapper[4984]: 
I0130 10:27:48.170879 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55" podStartSLOduration=19.190531335 podStartE2EDuration="36.170860282s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:29.940774306 +0000 UTC m=+954.507078130" lastFinishedPulling="2026-01-30 10:27:46.921103253 +0000 UTC m=+971.487407077" observedRunningTime="2026-01-30 10:27:48.166640431 +0000 UTC m=+972.732944275" watchObservedRunningTime="2026-01-30 10:27:48.170860282 +0000 UTC m=+972.737164116"
Jan 30 10:27:48 crc kubenswrapper[4984]: I0130 10:27:48.198518 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vpt86" podStartSLOduration=2.827026442 podStartE2EDuration="35.198484589s" podCreationTimestamp="2026-01-30 10:27:13 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.559743272 +0000 UTC m=+939.126047106" lastFinishedPulling="2026-01-30 10:27:46.931201429 +0000 UTC m=+971.497505253" observedRunningTime="2026-01-30 10:27:48.191757472 +0000 UTC m=+972.758061306" watchObservedRunningTime="2026-01-30 10:27:48.198484589 +0000 UTC m=+972.764788433"
Jan 30 10:27:53 crc kubenswrapper[4984]: I0130 10:27:53.104031 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-zwc2t"
Jan 30 10:27:53 crc kubenswrapper[4984]: I0130 10:27:53.256710 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-gcbx5"
Jan 30 10:27:55 crc kubenswrapper[4984]: E0130 10:27:55.092594 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podUID="bb50c219-6036-48d0-8568-0a1601150272"
Jan 30 10:27:55 crc kubenswrapper[4984]: I0130 10:27:55.473985 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8d786f48c-jtznv"
Jan 30 10:27:56 crc kubenswrapper[4984]: E0130 10:27:56.096055 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podUID="df5d4f32-b49b-46ea-8aac-a3b76b2f8f00"
Jan 30 10:27:56 crc kubenswrapper[4984]: E0130 10:27:56.096735 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podUID="c3eec896-3441-4b0e-a7e5-4bde717dbccd"
Jan 30 10:27:58 crc kubenswrapper[4984]: I0130 10:27:58.506816 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t5j55"
Jan 30 10:27:59 crc kubenswrapper[4984]: I0130 10:27:59.032703 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4df2g45"
Jan 30 10:27:59 crc kubenswrapper[4984]: E0130 10:27:59.093596 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podUID="350834d1-9352-4ca5-9c8a-acf60193ebc8"
Jan 30 10:28:09 crc kubenswrapper[4984]: I0130 10:28:09.236236 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" event={"ID":"df5d4f32-b49b-46ea-8aac-a3b76b2f8f00","Type":"ContainerStarted","Data":"8e917a20d18251cfbbd95595909c367bc476183eedd01a37805094362f38634e"}
Jan 30 10:28:09 crc kubenswrapper[4984]: I0130 10:28:09.237044 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"
Jan 30 10:28:09 crc kubenswrapper[4984]: I0130 10:28:09.261890 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4" podStartSLOduration=2.724782623 podStartE2EDuration="57.26186538s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.17185083 +0000 UTC m=+938.738154654" lastFinishedPulling="2026-01-30 10:28:08.708933587 +0000 UTC m=+993.275237411" observedRunningTime="2026-01-30 10:28:09.250493962 +0000 UTC m=+993.816797836" watchObservedRunningTime="2026-01-30 10:28:09.26186538 +0000 UTC m=+993.828169244"
Jan 30 10:28:10 crc kubenswrapper[4984]: I0130 10:28:10.245830 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" event={"ID":"bb50c219-6036-48d0-8568-0a1601150272","Type":"ContainerStarted","Data":"481082a9ce09ea61d61beec3f2fb8f04bb2e40427e325fddb2ea017e5bcc9b79"}
Jan 30 10:28:10 crc kubenswrapper[4984]: I0130 10:28:10.246601 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"
Jan 30 10:28:10 crc kubenswrapper[4984]: I0130 10:28:10.271124 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh" podStartSLOduration=2.920296906 podStartE2EDuration="58.271097835s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.274746709 +0000 UTC m=+938.841050533" lastFinishedPulling="2026-01-30 10:28:09.625547598 +0000 UTC m=+994.191851462" observedRunningTime="2026-01-30 10:28:10.260840738 +0000 UTC m=+994.827144572" watchObservedRunningTime="2026-01-30 10:28:10.271097835 +0000 UTC m=+994.837401699"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.252864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" event={"ID":"c3eec896-3441-4b0e-a7e5-4bde717dbccd","Type":"ContainerStarted","Data":"7a1eb40f15a44d1d204404543de4b3aa80540515fb198a0230e65f5874ee4155"}
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.253491 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.254433 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" event={"ID":"350834d1-9352-4ca5-9c8a-acf60193ebc8","Type":"ContainerStarted","Data":"1cace836f7cd1c54c0cd73265d5730c3fa089f2c5f9db3dcc006af664f7e99ba"}
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.254751 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.274435 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp" podStartSLOduration=3.075359847 podStartE2EDuration="59.27442232s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.286971481 +0000 UTC m=+938.853275315" lastFinishedPulling="2026-01-30 10:28:10.486033964 +0000 UTC m=+995.052337788" observedRunningTime="2026-01-30 10:28:11.270872754 +0000 UTC m=+995.837176578" watchObservedRunningTime="2026-01-30 10:28:11.27442232 +0000 UTC m=+995.840726134"
Jan 30 10:28:11 crc kubenswrapper[4984]: I0130 10:28:11.294871 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58" podStartSLOduration=3.054818123 podStartE2EDuration="59.294842942s" podCreationTimestamp="2026-01-30 10:27:12 +0000 UTC" firstStartedPulling="2026-01-30 10:27:14.357931769 +0000 UTC m=+938.924235593" lastFinishedPulling="2026-01-30 10:28:10.597956588 +0000 UTC m=+995.164260412" observedRunningTime="2026-01-30 10:28:11.292703224 +0000 UTC m=+995.859007048" watchObservedRunningTime="2026-01-30 10:28:11.294842942 +0000 UTC m=+995.861146766"
Jan 30 10:28:13 crc kubenswrapper[4984]: I0130 10:28:13.531377 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-r7hs4"
Jan 30 10:28:23 crc kubenswrapper[4984]: I0130 10:28:23.360922 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-28kkh"
Jan 30 10:28:23 crc kubenswrapper[4984]: I0130 10:28:23.451779 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-jvcvp"
Jan 30 10:28:23 crc kubenswrapper[4984]: I0130 10:28:23.587519 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4lz58"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.592067 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.596702 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601176 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601339 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601525 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.601355 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-hf6x5"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.611942 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.680754 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.682026 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.687003 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.700053 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.717457 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.717550 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819547 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819590 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819659 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.819704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.821404 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.857409 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"dnsmasq-dns-675f4bcbfc-mmlmd\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.920809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.920881 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.920964 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.921717 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.921951 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.924942 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd"
Jan 30 10:28:39 crc kubenswrapper[4984]: I0130 10:28:39.936325 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"dnsmasq-dns-78dd6ddcc-mvnjm\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.001515 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm"
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.406832 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:40 crc kubenswrapper[4984]: W0130 10:28:40.416524 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d1746bb_5861_4f20_a9d0_af3129baffd4.slice/crio-f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e WatchSource:0}: Error finding container f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e: Status 404 returned error can't find the container with id f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.476029 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" event={"ID":"7d1746bb-5861-4f20-a9d0-af3129baffd4","Type":"ContainerStarted","Data":"f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e"}
Jan 30 10:28:40 crc kubenswrapper[4984]: W0130 10:28:40.516725 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb253c369_a41e_47cb_af7e_0ca288023264.slice/crio-8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a WatchSource:0}: Error finding container 8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a: Status 404 returned error can't find the container with id 8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a
Jan 30 10:28:40 crc kubenswrapper[4984]: I0130 10:28:40.519406 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:41 crc kubenswrapper[4984]: I0130 10:28:41.486577 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" event={"ID":"b253c369-a41e-47cb-af7e-0ca288023264","Type":"ContainerStarted","Data":"8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a"}
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.181489 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.204794 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.205800 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.229890 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.367082 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.367274 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.367447 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.468569 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.468646 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.468708 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.469558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.470053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.488046 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.505375 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"dnsmasq-dns-666b6646f7-nt5m5\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.524663 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.526293 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.537789 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.544200 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.678480 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.678579 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.678663 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.780631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.781102 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.781134 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.785378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.788019 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.805082 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"dnsmasq-dns-57d769cc4f-22gp8\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") " pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:42 crc kubenswrapper[4984]: I0130 10:28:42.851619 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.104072 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"]
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.366514 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.368113 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.372550 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4bdkz"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.373355 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.374860 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375372 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375418 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375459 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.375855 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.382287 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.420232 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:28:43 crc kubenswrapper[4984]: W0130 10:28:43.421185 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf42e13a3_aadb_4dc7_aabb_5a769e2b0e2d.slice/crio-9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6 WatchSource:0}: Error finding container 9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6: Status 404 returned error can't find the container with id 9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525769 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525883 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525906 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525936 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.525957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526091 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526226 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526306 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.526334 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.536583 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerStarted","Data":"ca1bcc36301121abb73f0c33aefe22f5b51a7c08d43dfde31a9e2410ca9c91c2"}
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.537837 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" event={"ID":"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d","Type":"ContainerStarted","Data":"9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6"}
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627851 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627901 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627928 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627966 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.627986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628013 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0"
Jan 30 10:28:43 crc kubenswrapper[4984]: I0130
10:28:43.628059 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628082 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628104 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628127 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628152 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.628792 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.629574 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.629800 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.630172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.630631 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.633789 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " 
pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.634868 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.635561 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.636857 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.642944 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.647235 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.652625 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") " pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.655617 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.657320 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664014 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664379 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dmx9d" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664501 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664647 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664716 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.664989 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.672426 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.672539 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.698109 4984 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832543 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832838 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832857 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832902 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832922 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832977 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.832994 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.833016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.833043 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934598 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934671 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934700 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 
10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934715 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934742 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934802 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934818 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934833 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.934860 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.935288 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.935429 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.937487 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.937961 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.938107 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.940977 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.941141 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.941270 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.946540 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.953476 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.960434 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:43 crc kubenswrapper[4984]: I0130 10:28:43.967722 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.013871 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.195461 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:28:44 crc kubenswrapper[4984]: W0130 10:28:44.217165 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0c1fc2_7876_468d_86b8_7348a8418ee9.slice/crio-bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282 WatchSource:0}: Error finding container bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282: Status 404 returned error can't find the container with id bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282 Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.452691 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.563318 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerStarted","Data":"bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282"} Jan 30 10:28:44 crc kubenswrapper[4984]: I0130 10:28:44.995604 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.009444 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.009571 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.011561 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.015857 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2wxch" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.016028 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.017639 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.018938 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165640 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165671 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165690 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165712 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165735 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165763 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.165816 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzcq\" (UniqueName: \"kubernetes.io/projected/c4717968-368b-4b9d-acca-b2aee21abd1f-kube-api-access-wlzcq\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") 
" pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.266996 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlzcq\" (UniqueName: \"kubernetes.io/projected/c4717968-368b-4b9d-acca-b2aee21abd1f-kube-api-access-wlzcq\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267101 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267132 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267158 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267181 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267209 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267234 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.267292 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.269850 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.270000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.270024 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.270320 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c4717968-368b-4b9d-acca-b2aee21abd1f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.271798 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4717968-368b-4b9d-acca-b2aee21abd1f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.274212 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.287951 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4717968-368b-4b9d-acca-b2aee21abd1f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.291903 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlzcq\" (UniqueName: \"kubernetes.io/projected/c4717968-368b-4b9d-acca-b2aee21abd1f-kube-api-access-wlzcq\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 
10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.310137 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"c4717968-368b-4b9d-acca-b2aee21abd1f\") " pod="openstack/openstack-galera-0" Jan 30 10:28:45 crc kubenswrapper[4984]: I0130 10:28:45.336753 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.354302 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.355460 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.357281 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.357936 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.358145 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.358302 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-l87c5" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.362774 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493387 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493449 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493467 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwqv\" (UniqueName: \"kubernetes.io/projected/66296a3e-33af-496f-a870-9d0932aa4178-kube-api-access-5mwqv\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493491 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493509 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66296a3e-33af-496f-a870-9d0932aa4178-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493537 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493573 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.493588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.594639 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.594934 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwqv\" (UniqueName: \"kubernetes.io/projected/66296a3e-33af-496f-a870-9d0932aa4178-kube-api-access-5mwqv\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.594970 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595113 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66296a3e-33af-496f-a870-9d0932aa4178-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595286 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595432 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595455 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.595606 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596529 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596633 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66296a3e-33af-496f-a870-9d0932aa4178-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596774 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.596871 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.597551 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66296a3e-33af-496f-a870-9d0932aa4178-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " 
pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.606799 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.613368 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66296a3e-33af-496f-a870-9d0932aa4178-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.621152 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwqv\" (UniqueName: \"kubernetes.io/projected/66296a3e-33af-496f-a870-9d0932aa4178-kube-api-access-5mwqv\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.634526 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"66296a3e-33af-496f-a870-9d0932aa4178\") " pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.669710 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.732964 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.748576 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.748688 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.752365 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-n86x5" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.752555 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.753385 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904086 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904398 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-config-data\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904458 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-kolla-config\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:46 crc kubenswrapper[4984]: I0130 10:28:46.904507 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjcr\" (UniqueName: \"kubernetes.io/projected/ab30531b-1df7-460e-956c-bc849792098b-kube-api-access-vxjcr\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010367 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-config-data\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010529 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-kolla-config\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010591 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " 
pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010609 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjcr\" (UniqueName: \"kubernetes.io/projected/ab30531b-1df7-460e-956c-bc849792098b-kube-api-access-vxjcr\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.010672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.011783 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-config-data\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.011793 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ab30531b-1df7-460e-956c-bc849792098b-kolla-config\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.015706 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.021022 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab30531b-1df7-460e-956c-bc849792098b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.028020 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjcr\" (UniqueName: \"kubernetes.io/projected/ab30531b-1df7-460e-956c-bc849792098b-kube-api-access-vxjcr\") pod \"memcached-0\" (UID: \"ab30531b-1df7-460e-956c-bc849792098b\") " pod="openstack/memcached-0" Jan 30 10:28:47 crc kubenswrapper[4984]: I0130 10:28:47.112395 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.876888 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.877805 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.885180 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2q58b" Jan 30 10:28:48 crc kubenswrapper[4984]: I0130 10:28:48.894289 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.042611 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"kube-state-metrics-0\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.145696 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psg4h\" (UniqueName: 
\"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"kube-state-metrics-0\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.170030 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"kube-state-metrics-0\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.203672 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:28:49 crc kubenswrapper[4984]: W0130 10:28:49.475841 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d00f70a_4071_4375_81f3_45e7aab83cd3.slice/crio-9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83 WatchSource:0}: Error finding container 9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83: Status 404 returned error can't find the container with id 9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83 Jan 30 10:28:49 crc kubenswrapper[4984]: I0130 10:28:49.617660 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerStarted","Data":"9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83"} Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.197716 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-js4wt"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.199561 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.204232 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7sxg4" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.204739 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.206157 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m4spx"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.207098 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.208427 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.223239 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.233069 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-js4wt"] Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.304732 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-run\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.304787 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63184ee8-263b-4506-8844-4ae4fd2a80c7-scripts\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " 
pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.304944 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-etc-ovs\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305025 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305050 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-ovn-controller-tls-certs\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305082 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2590fda-d6e0-4182-96ef-8326001108d9-scripts\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305100 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-combined-ca-bundle\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 
10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305133 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpzq\" (UniqueName: \"kubernetes.io/projected/c2590fda-d6e0-4182-96ef-8326001108d9-kube-api-access-nhpzq\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305161 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-log-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305192 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-log\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305240 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-lib\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.305311 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgs95\" (UniqueName: \"kubernetes.io/projected/63184ee8-263b-4506-8844-4ae4fd2a80c7-kube-api-access-lgs95\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc 
kubenswrapper[4984]: I0130 10:28:52.305388 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407016 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-etc-ovs\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407099 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407125 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-ovn-controller-tls-certs\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407149 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2590fda-d6e0-4182-96ef-8326001108d9-scripts\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407163 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-combined-ca-bundle\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407186 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpzq\" (UniqueName: \"kubernetes.io/projected/c2590fda-d6e0-4182-96ef-8326001108d9-kube-api-access-nhpzq\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-log-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407227 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-log\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407269 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-lib\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407289 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgs95\" (UniqueName: 
\"kubernetes.io/projected/63184ee8-263b-4506-8844-4ae4fd2a80c7-kube-api-access-lgs95\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407601 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-run\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407637 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63184ee8-263b-4506-8844-4ae4fd2a80c7-scripts\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407665 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-etc-ovs\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " 
pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.407967 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-log\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.409984 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2590fda-d6e0-4182-96ef-8326001108d9-scripts\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.409994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63184ee8-263b-4506-8844-4ae4fd2a80c7-scripts\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-log-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410128 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-lib\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410157 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/63184ee8-263b-4506-8844-4ae4fd2a80c7-var-run-ovn\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.410172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2590fda-d6e0-4182-96ef-8326001108d9-var-run\") pod \"ovn-controller-ovs-js4wt\" (UID: \"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.415872 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-combined-ca-bundle\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.422725 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/63184ee8-263b-4506-8844-4ae4fd2a80c7-ovn-controller-tls-certs\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.424216 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgs95\" (UniqueName: \"kubernetes.io/projected/63184ee8-263b-4506-8844-4ae4fd2a80c7-kube-api-access-lgs95\") pod \"ovn-controller-m4spx\" (UID: \"63184ee8-263b-4506-8844-4ae4fd2a80c7\") " pod="openstack/ovn-controller-m4spx" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.437219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpzq\" (UniqueName: \"kubernetes.io/projected/c2590fda-d6e0-4182-96ef-8326001108d9-kube-api-access-nhpzq\") pod \"ovn-controller-ovs-js4wt\" (UID: 
\"c2590fda-d6e0-4182-96ef-8326001108d9\") " pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.532796 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:28:52 crc kubenswrapper[4984]: I0130 10:28:52.549861 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.081716 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.083110 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.086143 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088313 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088395 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hb4nh" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088444 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.088562 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.096082 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.218847 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220144 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220211 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnf4c\" (UniqueName: \"kubernetes.io/projected/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-kube-api-access-nnf4c\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220296 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220341 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220370 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220395 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.220428 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321427 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321487 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321515 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 
10:28:53.321546 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321619 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321685 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnf4c\" (UniqueName: \"kubernetes.io/projected/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-kube-api-access-nnf4c\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.321732 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.322305 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.322650 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.322989 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.323909 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.325832 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.335063 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc 
kubenswrapper[4984]: I0130 10:28:53.342514 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.344149 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.350209 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnf4c\" (UniqueName: \"kubernetes.io/projected/ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4-kube-api-access-nnf4c\") pod \"ovsdbserver-nb-0\" (UID: \"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4\") " pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:53 crc kubenswrapper[4984]: I0130 10:28:53.423096 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.336620 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.338718 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341050 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341398 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wgztt" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341797 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.341981 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.343433 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.496991 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497080 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497146 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497174 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497408 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjbk\" (UniqueName: \"kubernetes.io/projected/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-kube-api-access-qsjbk\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497526 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.497576 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 
10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.599833 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.599937 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.599999 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600066 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjbk\" (UniqueName: \"kubernetes.io/projected/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-kube-api-access-qsjbk\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600133 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600465 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.600889 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.601159 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.601447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc 
kubenswrapper[4984]: I0130 10:28:56.601809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.608179 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.614981 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.616141 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.621895 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjbk\" (UniqueName: \"kubernetes.io/projected/53bd6a11-6ac6-4b0e-ae41-8afd88f351e6-kube-api-access-qsjbk\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.623483 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6\") " pod="openstack/ovsdbserver-sb-0" Jan 30 10:28:56 crc kubenswrapper[4984]: I0130 10:28:56.663542 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.848031 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.848460 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq552,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-22gp8_openstack(f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.849675 4984 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.860737 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.860925 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8hst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mvnjm_openstack(7d1746bb-5861-4f20-a9d0-af3129baffd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:03 crc kubenswrapper[4984]: E0130 10:29:03.862178 4984 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" podUID="7d1746bb-5861-4f20-a9d0-af3129baffd4" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.716357 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.996888 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.997576 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5v5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-nt5m5_openstack(8939f3c8-3f71-4369-b30a-1ce52517ec33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:04 crc kubenswrapper[4984]: E0130 10:29:04.999216 4984 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.000558 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.001084 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-78qfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mmlmd_openstack(b253c369-a41e-47cb-af7e-0ca288023264): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.002299 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" podUID="b253c369-a41e-47cb-af7e-0ca288023264" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.286053 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.372444 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") pod \"7d1746bb-5861-4f20-a9d0-af3129baffd4\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.372501 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") pod \"7d1746bb-5861-4f20-a9d0-af3129baffd4\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.372543 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") pod \"7d1746bb-5861-4f20-a9d0-af3129baffd4\" (UID: \"7d1746bb-5861-4f20-a9d0-af3129baffd4\") " Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.373231 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config" (OuterVolumeSpecName: "config") pod "7d1746bb-5861-4f20-a9d0-af3129baffd4" (UID: "7d1746bb-5861-4f20-a9d0-af3129baffd4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.376643 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d1746bb-5861-4f20-a9d0-af3129baffd4" (UID: "7d1746bb-5861-4f20-a9d0-af3129baffd4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.382945 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst" (OuterVolumeSpecName: "kube-api-access-h8hst") pod "7d1746bb-5861-4f20-a9d0-af3129baffd4" (UID: "7d1746bb-5861-4f20-a9d0-af3129baffd4"). InnerVolumeSpecName "kube-api-access-h8hst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.473784 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hst\" (UniqueName: \"kubernetes.io/projected/7d1746bb-5861-4f20-a9d0-af3129baffd4-kube-api-access-h8hst\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.473821 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.473837 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1746bb-5861-4f20-a9d0-af3129baffd4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.718969 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.727672 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.728498 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" event={"ID":"7d1746bb-5861-4f20-a9d0-af3129baffd4","Type":"ContainerDied","Data":"f64b4bc308a8f5872d963033310f4556a8cf3c5ba649ce1b67ed488a05854a2e"} Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.728580 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mvnjm" Jan 30 10:29:05 crc kubenswrapper[4984]: E0130 10:29:05.731545 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.736658 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.744418 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.851276 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.853816 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mvnjm"] Jan 30 10:29:05 crc kubenswrapper[4984]: I0130 10:29:05.892426 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 10:29:05 crc kubenswrapper[4984]: W0130 10:29:05.987384 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53bd6a11_6ac6_4b0e_ae41_8afd88f351e6.slice/crio-17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69 WatchSource:0}: Error finding container 17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69: Status 404 returned error can't find the container with id 17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69 Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.110979 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1746bb-5861-4f20-a9d0-af3129baffd4" path="/var/lib/kubelet/pods/7d1746bb-5861-4f20-a9d0-af3129baffd4/volumes" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.163309 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx"] Jan 30 10:29:06 crc kubenswrapper[4984]: W0130 10:29:06.182890 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63184ee8_263b_4506_8844_4ae4fd2a80c7.slice/crio-0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057 WatchSource:0}: Error finding container 0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057: Status 404 returned error can't find the container with id 0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057 Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.347849 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.356154 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.404795 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") pod \"b253c369-a41e-47cb-af7e-0ca288023264\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.404956 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") pod \"b253c369-a41e-47cb-af7e-0ca288023264\" (UID: \"b253c369-a41e-47cb-af7e-0ca288023264\") " Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.405915 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config" (OuterVolumeSpecName: "config") pod "b253c369-a41e-47cb-af7e-0ca288023264" (UID: "b253c369-a41e-47cb-af7e-0ca288023264"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.412939 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk" (OuterVolumeSpecName: "kube-api-access-78qfk") pod "b253c369-a41e-47cb-af7e-0ca288023264" (UID: "b253c369-a41e-47cb-af7e-0ca288023264"). InnerVolumeSpecName "kube-api-access-78qfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.418207 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-js4wt"] Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.507352 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78qfk\" (UniqueName: \"kubernetes.io/projected/b253c369-a41e-47cb-af7e-0ca288023264-kube-api-access-78qfk\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.507657 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b253c369-a41e-47cb-af7e-0ca288023264-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:06 crc kubenswrapper[4984]: W0130 10:29:06.508640 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2590fda_d6e0_4182_96ef_8326001108d9.slice/crio-eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1 WatchSource:0}: Error finding container eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1: Status 404 returned error can't find the container with id eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1 Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.736955 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerStarted","Data":"f05fd5917bae61700291c3765574cc3a3b08139624adb6fb3ccd5f7058c55fa6"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.738642 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerStarted","Data":"d7558e6899b57453defe5a8f8e0b329c15d0ab1ed9c92562f548313b7fd4ee8d"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.740026 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab30531b-1df7-460e-956c-bc849792098b","Type":"ContainerStarted","Data":"dbeae79b557a4d5d76adcfd6e3c34ce51742ab31d0cd794bd43ac8a4800773e7"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.741146 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" event={"ID":"b253c369-a41e-47cb-af7e-0ca288023264","Type":"ContainerDied","Data":"8d1a5070ea24d4afaa53ef61b81fb3d4688a8b2d17b684e683b566ba49e1243a"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.741222 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mmlmd" Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.759743 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6","Type":"ContainerStarted","Data":"17a8133c2335f8dd4b4760d66b66f9047406fe492dc811dfff1cc0ca72121a69"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.762081 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerStarted","Data":"eff677dd09e0383d106502bd39e945d4fd928a4c1720d4becf0f498ba4e46dd1"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.768134 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerStarted","Data":"627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.770435 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerStarted","Data":"c5002e46986f6f4c451f9468cadd7f7ba7e729210f83d0f4878b290572656b1c"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 
10:29:06.772489 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerStarted","Data":"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.777220 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx" event={"ID":"63184ee8-263b-4506-8844-4ae4fd2a80c7","Type":"ContainerStarted","Data":"0f1cecddf7931637e4df3ea787e37db2329ac6bf0f1c4ce46cacc47ad680b057"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.781383 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4","Type":"ContainerStarted","Data":"f233401272b163e48dbfe75166acec0b6153e73e21dd1ce1fe5fe3a64aef2476"} Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.802386 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"] Jan 30 10:29:06 crc kubenswrapper[4984]: I0130 10:29:06.807582 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mmlmd"] Jan 30 10:29:08 crc kubenswrapper[4984]: I0130 10:29:08.100235 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b253c369-a41e-47cb-af7e-0ca288023264" path="/var/lib/kubelet/pods/b253c369-a41e-47cb-af7e-0ca288023264/volumes" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.844345 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx" event={"ID":"63184ee8-263b-4506-8844-4ae4fd2a80c7","Type":"ContainerStarted","Data":"295e903f6c433d1b4cc0fab8c84b49aafdbbd5dd9e51ac452ad347ad8a6d5804"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.845672 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-m4spx" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.847055 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ab30531b-1df7-460e-956c-bc849792098b","Type":"ContainerStarted","Data":"af5568469e16ed4d5014b0cc648521c552336374ab29b1f325282d9543145451"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.847596 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.850348 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4","Type":"ContainerStarted","Data":"a9549e086d365a8c2c24259a2c7d9d56ea980f46e6d4c7af42d444de5cb1f4d6"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.852961 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6","Type":"ContainerStarted","Data":"1471d69f10e0783bcf37f9e8756fb9a0077c5bfb22b7ff96a50716dd478c6f8b"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.855894 4984 generic.go:334] "Generic (PLEG): container finished" podID="c2590fda-d6e0-4182-96ef-8326001108d9" containerID="9d4349be33beb7e3ee06b04f992d8ded7fcf2e6cf09b9d5e05f0eebd74d32355" exitCode=0 Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.856031 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerDied","Data":"9d4349be33beb7e3ee06b04f992d8ded7fcf2e6cf09b9d5e05f0eebd74d32355"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.859867 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerStarted","Data":"a107e2db3ea7fc6f394e67a766834a0581b7be8428c4fdc44437d53c65ecc69a"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.863881 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerStarted","Data":"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.864504 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.867571 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerStarted","Data":"f4fb50e73d0b56b05a1b621ee974af085fa179029c705501759af7b338c19d68"} Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.880374 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-m4spx" podStartSLOduration=15.103284748 podStartE2EDuration="21.88033621s" podCreationTimestamp="2026-01-30 10:28:52 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.197426197 +0000 UTC m=+1050.763730021" lastFinishedPulling="2026-01-30 10:29:12.974477609 +0000 UTC m=+1057.540781483" observedRunningTime="2026-01-30 10:29:13.870215947 +0000 UTC m=+1058.436519821" watchObservedRunningTime="2026-01-30 10:29:13.88033621 +0000 UTC m=+1058.446640074" Jan 30 10:29:13 crc kubenswrapper[4984]: I0130 10:29:13.969894 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.661802608 podStartE2EDuration="25.9698614s" podCreationTimestamp="2026-01-30 10:28:48 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.72388684 +0000 UTC m=+1050.290190674" lastFinishedPulling="2026-01-30 10:29:13.031945632 +0000 UTC m=+1057.598249466" observedRunningTime="2026-01-30 10:29:13.951885314 +0000 UTC m=+1058.518189168" watchObservedRunningTime="2026-01-30 10:29:13.9698614 +0000 UTC m=+1058.536165264" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.011847 4984 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.228699738 podStartE2EDuration="28.011813693s" podCreationTimestamp="2026-01-30 10:28:46 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.757309964 +0000 UTC m=+1050.323613788" lastFinishedPulling="2026-01-30 10:29:12.540423909 +0000 UTC m=+1057.106727743" observedRunningTime="2026-01-30 10:29:14.009955793 +0000 UTC m=+1058.576259637" watchObservedRunningTime="2026-01-30 10:29:14.011813693 +0000 UTC m=+1058.578117527" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.882946 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerStarted","Data":"d8f0feffdfd7574a9a8180a31875276b60a877836a0b9fb1fbeaca48e0525762"} Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.882983 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-js4wt" event={"ID":"c2590fda-d6e0-4182-96ef-8326001108d9","Type":"ContainerStarted","Data":"c5fb323d6241bd3adc240c025ac0452faeea862ad83f4407e1c834d1c021318a"} Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.883013 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.883030 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-js4wt" Jan 30 10:29:14 crc kubenswrapper[4984]: I0130 10:29:14.912044 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-js4wt" podStartSLOduration=16.46240751 podStartE2EDuration="22.912026702s" podCreationTimestamp="2026-01-30 10:28:52 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.513443148 +0000 UTC m=+1051.079746972" lastFinishedPulling="2026-01-30 10:29:12.96306234 +0000 UTC m=+1057.529366164" observedRunningTime="2026-01-30 10:29:14.910162332 +0000 UTC 
m=+1059.476466146" watchObservedRunningTime="2026-01-30 10:29:14.912026702 +0000 UTC m=+1059.478330516" Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.901174 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4","Type":"ContainerStarted","Data":"c68f05a2b480f4ae854f8b2225307b3c4c1f995e37322d7ecdb36aacfd1dba7c"} Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.907677 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"53bd6a11-6ac6-4b0e-ae41-8afd88f351e6","Type":"ContainerStarted","Data":"c2df16dd3118c3ba5aff82621c2efd91e81fefc5fbe2d19e285e4a56a869f07d"} Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.953553 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.820488816 podStartE2EDuration="23.953524119s" podCreationTimestamp="2026-01-30 10:28:52 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.428519943 +0000 UTC m=+1050.994823767" lastFinishedPulling="2026-01-30 10:29:15.561555246 +0000 UTC m=+1060.127859070" observedRunningTime="2026-01-30 10:29:15.934576907 +0000 UTC m=+1060.500880741" watchObservedRunningTime="2026-01-30 10:29:15.953524119 +0000 UTC m=+1060.519827963" Jan 30 10:29:15 crc kubenswrapper[4984]: I0130 10:29:15.973117 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.411352038 podStartE2EDuration="20.973090818s" podCreationTimestamp="2026-01-30 10:28:55 +0000 UTC" firstStartedPulling="2026-01-30 10:29:06.01252755 +0000 UTC m=+1050.578831374" lastFinishedPulling="2026-01-30 10:29:15.57426633 +0000 UTC m=+1060.140570154" observedRunningTime="2026-01-30 10:29:15.969670116 +0000 UTC m=+1060.535973970" watchObservedRunningTime="2026-01-30 10:29:15.973090818 +0000 UTC m=+1060.539394652" Jan 30 10:29:16 crc kubenswrapper[4984]: I0130 
10:29:16.664810 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:16 crc kubenswrapper[4984]: I0130 10:29:16.922308 4984 generic.go:334] "Generic (PLEG): container finished" podID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" exitCode=0 Jan 30 10:29:16 crc kubenswrapper[4984]: I0130 10:29:16.923537 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerDied","Data":"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.423750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.496667 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.664128 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.738053 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.934057 4984 generic.go:334] "Generic (PLEG): container finished" podID="c4717968-368b-4b9d-acca-b2aee21abd1f" containerID="f4fb50e73d0b56b05a1b621ee974af085fa179029c705501759af7b338c19d68" exitCode=0 Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.934187 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerDied","Data":"f4fb50e73d0b56b05a1b621ee974af085fa179029c705501759af7b338c19d68"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.939516 4984 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerStarted","Data":"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.940027 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.949326 4984 generic.go:334] "Generic (PLEG): container finished" podID="66296a3e-33af-496f-a870-9d0932aa4178" containerID="a107e2db3ea7fc6f394e67a766834a0581b7be8428c4fdc44437d53c65ecc69a" exitCode=0 Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.949370 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerDied","Data":"a107e2db3ea7fc6f394e67a766834a0581b7be8428c4fdc44437d53c65ecc69a"} Jan 30 10:29:17 crc kubenswrapper[4984]: I0130 10:29:17.950497 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.003862 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podStartSLOduration=2.637354001 podStartE2EDuration="36.003838579s" podCreationTimestamp="2026-01-30 10:28:42 +0000 UTC" firstStartedPulling="2026-01-30 10:28:43.168390541 +0000 UTC m=+1027.734694365" lastFinishedPulling="2026-01-30 10:29:16.534875119 +0000 UTC m=+1061.101178943" observedRunningTime="2026-01-30 10:29:18.000664083 +0000 UTC m=+1062.566967947" watchObservedRunningTime="2026-01-30 10:29:18.003838579 +0000 UTC m=+1062.570142413" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.484742 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 
10:29:18.794410 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.846346 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ms66d"]
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.848533 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.855781 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.865112 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"]
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.867231 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.873054 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.877896 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"]
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.893363 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ms66d"]
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948109 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948173 4984 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948199 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-combined-ca-bundle\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948225 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4r5\" (UniqueName: \"kubernetes.io/projected/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-kube-api-access-7x4r5\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948273 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovs-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:18 crc kubenswrapper[4984]: 
I0130 10:29:18.948317 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948335 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovn-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948365 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-config\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.948391 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.967047 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66296a3e-33af-496f-a870-9d0932aa4178","Type":"ContainerStarted","Data":"02f0662bf68eae537cc49b2c3b8ac8f5215a58dc0cced5a9f890f2fda42996ff"}
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.972341 4984 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openstack/openstack-galera-0" event={"ID":"c4717968-368b-4b9d-acca-b2aee21abd1f","Type":"ContainerStarted","Data":"64848261151fdbc4cd8e571028717ed6924c136b8ae75bd564d260cc390d517c"}
Jan 30 10:29:18 crc kubenswrapper[4984]: I0130 10:29:18.992496 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.772420484 podStartE2EDuration="33.992465077s" podCreationTimestamp="2026-01-30 10:28:45 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.743050638 +0000 UTC m=+1050.309354462" lastFinishedPulling="2026-01-30 10:29:12.963095201 +0000 UTC m=+1057.529399055" observedRunningTime="2026-01-30 10:29:18.988470259 +0000 UTC m=+1063.554774083" watchObservedRunningTime="2026-01-30 10:29:18.992465077 +0000 UTC m=+1063.558768901"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.015982 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.782402534 podStartE2EDuration="36.015964192s" podCreationTimestamp="2026-01-30 10:28:43 +0000 UTC" firstStartedPulling="2026-01-30 10:29:05.74273191 +0000 UTC m=+1050.309035734" lastFinishedPulling="2026-01-30 10:29:12.976293568 +0000 UTC m=+1057.542597392" observedRunningTime="2026-01-30 10:29:19.01069786 +0000 UTC m=+1063.577001684" watchObservedRunningTime="2026-01-30 10:29:19.015964192 +0000 UTC m=+1063.582268006"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.019133 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049475 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") "
pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049527 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-combined-ca-bundle\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049560 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4r5\" (UniqueName: \"kubernetes.io/projected/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-kube-api-access-7x4r5\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049592 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovs-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049624 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049642 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30
10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovn-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049689 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-config\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.049749 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.051698 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.053608 4984
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovs-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.054161 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.055111 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-config\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.055148 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.055219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-ovn-rundir\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.066479 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-combined-ca-bundle\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.077921 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"dnsmasq-dns-5bf47b49b7-pb99t\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.084673 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4r5\" (UniqueName: \"kubernetes.io/projected/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-kube-api-access-7x4r5\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.086812 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbcc0b77-42fd-47ec-9b91-94e2c070c0ec-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ms66d\" (UID: \"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec\") " pod="openstack/ovn-controller-metrics-ms66d" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.142518 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.162311 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.166701 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.170921 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.173737 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ms66d"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.191117 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.203004 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"]
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.232074 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.262505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.262615 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.263716 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.263780 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.264284 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.292596 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.366955 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") pod \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") "
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.367011 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") pod \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") "
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.369328 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") pod \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\" (UID: \"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d\") "
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.369408 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config" (OuterVolumeSpecName: "config") pod "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" (UID: "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.369778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" (UID: "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.372935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.371243 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.374067 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552" (OuterVolumeSpecName: "kube-api-access-kq552") pod "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" (UID: "f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"). InnerVolumeSpecName "kube-api-access-kq552".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375072 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375534 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375640 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375710 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375724 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.375738 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq552\" (UniqueName: \"kubernetes.io/projected/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d-kube-api-access-kq552\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.377992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.379773 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.382550 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.406129 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"dnsmasq-dns-8554648995-72z27\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") " pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.426241 4984 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-northd-0"]
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.429279 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433195 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433476 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433768 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wpfh5"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433849 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.433954 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580488 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-config\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580565 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580598 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-scripts\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580677 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvbf\" (UniqueName: \"kubernetes.io/projected/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-kube-api-access-vpvbf\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580713 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580766 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.580927 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.588463 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682278 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682402 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-config\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682435 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682466 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-scripts\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0"
Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682498 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-vpvbf\" (UniqueName: \"kubernetes.io/projected/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-kube-api-access-vpvbf\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.682525 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.683665 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.683856 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-scripts\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.683886 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-config\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.688339 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: 
I0130 10:29:19.688646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.693613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.704384 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvbf\" (UniqueName: \"kubernetes.io/projected/e86681f0-5ba9-45f2-b0b7-0b9a49dc6706-kube-api-access-vpvbf\") pod \"ovn-northd-0\" (UID: \"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706\") " pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.757018 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 10:29:19 crc kubenswrapper[4984]: W0130 10:29:19.760183 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbcc0b77_42fd_47ec_9b91_94e2c070c0ec.slice/crio-384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004 WatchSource:0}: Error finding container 384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004: Status 404 returned error can't find the container with id 384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004 Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.760510 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ms66d"] Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.824434 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:19 crc kubenswrapper[4984]: W0130 10:29:19.832744 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod364f1e33_f14a_4248_82d5_eca3ab3e36c3.slice/crio-6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c WatchSource:0}: Error finding container 6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c: Status 404 returned error can't find the container with id 6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.988651 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" event={"ID":"f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d","Type":"ContainerDied","Data":"9701a6d0c8833a2ed95e4f25be768f5acc4984ff9b39575904813d239a3c6ae6"} Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.988925 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.991870 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ms66d" event={"ID":"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec","Type":"ContainerStarted","Data":"384aa0352eee015703246bec88a26b2560927adee3b59b87a41d8f3d9abb9004"} Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.994321 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerStarted","Data":"6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c"} Jan 30 10:29:19 crc kubenswrapper[4984]: I0130 10:29:19.994971 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" containerID="cri-o://68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" gracePeriod=10 Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.029486 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"] Jan 30 10:29:20 crc kubenswrapper[4984]: W0130 10:29:20.079037 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8fd5e7_478c_498f_b9a6_5ad836cf08fa.slice/crio-619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46 WatchSource:0}: Error finding container 619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46: Status 404 returned error can't find the container with id 619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46 Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.220125 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 10:29:20 crc kubenswrapper[4984]: W0130 10:29:20.246168 4984 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86681f0_5ba9_45f2_b0b7_0b9a49dc6706.slice/crio-76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3 WatchSource:0}: Error finding container 76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3: Status 404 returned error can't find the container with id 76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3 Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.450560 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.497623 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") pod \"8939f3c8-3f71-4369-b30a-1ce52517ec33\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.497799 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") pod \"8939f3c8-3f71-4369-b30a-1ce52517ec33\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.497862 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") pod \"8939f3c8-3f71-4369-b30a-1ce52517ec33\" (UID: \"8939f3c8-3f71-4369-b30a-1ce52517ec33\") " Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.501894 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f" (OuterVolumeSpecName: "kube-api-access-x5v5f") pod 
"8939f3c8-3f71-4369-b30a-1ce52517ec33" (UID: "8939f3c8-3f71-4369-b30a-1ce52517ec33"). InnerVolumeSpecName "kube-api-access-x5v5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.542602 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8939f3c8-3f71-4369-b30a-1ce52517ec33" (UID: "8939f3c8-3f71-4369-b30a-1ce52517ec33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.548974 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config" (OuterVolumeSpecName: "config") pod "8939f3c8-3f71-4369-b30a-1ce52517ec33" (UID: "8939f3c8-3f71-4369-b30a-1ce52517ec33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.599379 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5v5f\" (UniqueName: \"kubernetes.io/projected/8939f3c8-3f71-4369-b30a-1ce52517ec33-kube-api-access-x5v5f\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.599485 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:20 crc kubenswrapper[4984]: I0130 10:29:20.600690 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8939f3c8-3f71-4369-b30a-1ce52517ec33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.005403 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706","Type":"ContainerStarted","Data":"76c9744d1f0799131e2bfc5cdde2157d2d8cddb6d665518555edb52dd814d2c3"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.007865 4984 generic.go:334] "Generic (PLEG): container finished" podID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerID="f6e1b34b25853d4897e5be9605b68bf7a00d7cdea46fe63f164fe4950f791a05" exitCode=0 Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.007989 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerDied","Data":"f6e1b34b25853d4897e5be9605b68bf7a00d7cdea46fe63f164fe4950f791a05"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010471 4984 generic.go:334] "Generic (PLEG): container finished" podID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" exitCode=0 Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010517 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerDied","Data":"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010600 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nt5m5" event={"ID":"8939f3c8-3f71-4369-b30a-1ce52517ec33","Type":"ContainerDied","Data":"ca1bcc36301121abb73f0c33aefe22f5b51a7c08d43dfde31a9e2410ca9c91c2"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.010623 4984 scope.go:117] "RemoveContainer" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.012353 4984 generic.go:334] "Generic (PLEG): container finished" podID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d" exitCode=0 Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.012436 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerDied","Data":"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.012472 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerStarted","Data":"619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.016296 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ms66d" 
event={"ID":"dbcc0b77-42fd-47ec-9b91-94e2c070c0ec","Type":"ContainerStarted","Data":"07a4855467dd1c7eda641750a5733aa89bf776673d9119c09c08bdeced73253e"} Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.059096 4984 scope.go:117] "RemoveContainer" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.134328 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"] Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.174871 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ms66d" podStartSLOduration=3.174848456 podStartE2EDuration="3.174848456s" podCreationTimestamp="2026-01-30 10:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:21.149003998 +0000 UTC m=+1065.715307822" watchObservedRunningTime="2026-01-30 10:29:21.174848456 +0000 UTC m=+1065.741152280" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.188881 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nt5m5"] Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.309659 4984 scope.go:117] "RemoveContainer" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" Jan 30 10:29:21 crc kubenswrapper[4984]: E0130 10:29:21.314472 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea\": container with ID starting with 68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea not found: ID does not exist" containerID="68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.314529 4984 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea"} err="failed to get container status \"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea\": rpc error: code = NotFound desc = could not find container \"68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea\": container with ID starting with 68578dd3fab5707fc30dcbdc5b9266791c9d8b795fa70ecae7daea811bedf0ea not found: ID does not exist" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.314559 4984 scope.go:117] "RemoveContainer" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" Jan 30 10:29:21 crc kubenswrapper[4984]: E0130 10:29:21.316762 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf\": container with ID starting with 757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf not found: ID does not exist" containerID="757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf" Jan 30 10:29:21 crc kubenswrapper[4984]: I0130 10:29:21.316782 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf"} err="failed to get container status \"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf\": rpc error: code = NotFound desc = could not find container \"757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf\": container with ID starting with 757b61e0f05a5755d8097a1cb1ec3ec1871a52f339f8b75811b58800d74a24bf not found: ID does not exist" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.027015 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" 
event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerStarted","Data":"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"} Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.027356 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-72z27" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.034545 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706","Type":"ContainerStarted","Data":"db62640365b168549f02a4fb4a3c9c3411cf7ca839ca1cf4ad02f485e4d98bf7"} Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.037196 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerStarted","Data":"2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4"} Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.075568 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" podStartSLOduration=4.075546178 podStartE2EDuration="4.075546178s" podCreationTimestamp="2026-01-30 10:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:22.073538864 +0000 UTC m=+1066.639842698" watchObservedRunningTime="2026-01-30 10:29:22.075546178 +0000 UTC m=+1066.641850002" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.075671 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-72z27" podStartSLOduration=3.075665821 podStartE2EDuration="3.075665821s" podCreationTimestamp="2026-01-30 10:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:22.055820125 +0000 UTC m=+1066.622123949" 
watchObservedRunningTime="2026-01-30 10:29:22.075665821 +0000 UTC m=+1066.641969645" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.103179 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" path="/var/lib/kubelet/pods/8939f3c8-3f71-4369-b30a-1ce52517ec33/volumes" Jan 30 10:29:22 crc kubenswrapper[4984]: I0130 10:29:22.113380 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 10:29:23 crc kubenswrapper[4984]: I0130 10:29:23.048331 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e86681f0-5ba9-45f2-b0b7-0b9a49dc6706","Type":"ContainerStarted","Data":"f2f18e96c846ab218b03d6e8ab1b11093c2d863bf621adabc996f658029479a6"} Jan 30 10:29:23 crc kubenswrapper[4984]: I0130 10:29:23.048809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:23 crc kubenswrapper[4984]: I0130 10:29:23.072556 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.560899039 podStartE2EDuration="4.072535622s" podCreationTimestamp="2026-01-30 10:29:19 +0000 UTC" firstStartedPulling="2026-01-30 10:29:20.249752975 +0000 UTC m=+1064.816056799" lastFinishedPulling="2026-01-30 10:29:21.761389518 +0000 UTC m=+1066.327693382" observedRunningTime="2026-01-30 10:29:23.066668904 +0000 UTC m=+1067.632972738" watchObservedRunningTime="2026-01-30 10:29:23.072535622 +0000 UTC m=+1067.638839466" Jan 30 10:29:24 crc kubenswrapper[4984]: I0130 10:29:24.055727 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 10:29:25 crc kubenswrapper[4984]: I0130 10:29:25.338393 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 10:29:25 crc kubenswrapper[4984]: I0130 10:29:25.338465 4984 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 10:29:25 crc kubenswrapper[4984]: I0130 10:29:25.432642 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.173186 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.670420 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.671599 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.720953 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:29:26 crc kubenswrapper[4984]: E0130 10:29:26.721463 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.721490 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" Jan 30 10:29:26 crc kubenswrapper[4984]: E0130 10:29:26.721549 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="init" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.721562 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="init" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.721874 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8939f3c8-3f71-4369-b30a-1ce52517ec33" containerName="dnsmasq-dns" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.722798 4984 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.729239 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.731839 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"] Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.796308 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.797212 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.799730 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jjssn"] Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.824125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.824289 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925680 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.925797 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.926381 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.936434 
4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 30 10:29:26 crc kubenswrapper[4984]: I0130 10:29:26.956873 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"keystone-6746-account-create-update-clg9v\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") " pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.026898 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.026987 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.027840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.032182 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.033126 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.042447 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.043594 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.046982 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.048014 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4q2ws"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.052763 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"keystone-db-create-jjssn\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") " pod="openstack/keystone-db-create-jjssn" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.058617 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"] Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.071572 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v" Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.121124 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jjssn"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128227 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128389 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.128726 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.157511 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230454 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230602 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230642 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.230672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.231622 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.232848 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.255866 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"placement-db-create-4q2ws\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") " pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.257044 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"placement-f26c-account-create-update-7p7pm\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.348860 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.391312 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm"
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.546288 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"]
Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.550227 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd118357_c4bf_43ef_a738_9fcd6b07aac4.slice/crio-40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265 WatchSource:0}: Error finding container 40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265: Status 404 returned error can't find the container with id 40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.659520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jjssn"]
Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.660565 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode83eb734_fae0_40ac_85db_8f8c8fb26133.slice/crio-bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9 WatchSource:0}: Error finding container bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9: Status 404 returned error can't find the container with id bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.850030 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4q2ws"]
Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.854440 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c89dde7_c492_44dd_b36c_571540039b30.slice/crio-ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137 WatchSource:0}: Error finding container ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137: Status 404 returned error can't find the container with id ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137
Jan 30 10:29:27 crc kubenswrapper[4984]: I0130 10:29:27.928684 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"]
Jan 30 10:29:27 crc kubenswrapper[4984]: W0130 10:29:27.985376 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dd7bd77_9e19_4ad1_9711_e0290f74afa8.slice/crio-a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577 WatchSource:0}: Error finding container a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577: Status 404 returned error can't find the container with id a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.088112 4984 generic.go:334] "Generic (PLEG): container finished" podID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerID="990b9baffd84708013a7a3ee4aa2247425d308cfa8107b4fdee81cf4fe0b11dc" exitCode=0
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.088217 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6746-account-create-update-clg9v" event={"ID":"dd118357-c4bf-43ef-a738-9fcd6b07aac4","Type":"ContainerDied","Data":"990b9baffd84708013a7a3ee4aa2247425d308cfa8107b4fdee81cf4fe0b11dc"}
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.088315 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6746-account-create-update-clg9v" event={"ID":"dd118357-c4bf-43ef-a738-9fcd6b07aac4","Type":"ContainerStarted","Data":"40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265"}
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.092328 4984 generic.go:334] "Generic (PLEG): container finished" podID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerID="70e9112a74a7aadc96357a6c30b6f274f66b33e88559a27a17cb48d3251c7fbb" exitCode=0
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.107956 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f26c-account-create-update-7p7pm" event={"ID":"0dd7bd77-9e19-4ad1-9711-e0290f74afa8","Type":"ContainerStarted","Data":"a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577"}
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.108032 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4q2ws" event={"ID":"4c89dde7-c492-44dd-b36c-571540039b30","Type":"ContainerStarted","Data":"ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137"}
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.108074 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jjssn" event={"ID":"e83eb734-fae0-40ac-85db-8f8c8fb26133","Type":"ContainerDied","Data":"70e9112a74a7aadc96357a6c30b6f274f66b33e88559a27a17cb48d3251c7fbb"}
Jan 30 10:29:28 crc kubenswrapper[4984]: I0130 10:29:28.108094 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jjssn" event={"ID":"e83eb734-fae0-40ac-85db-8f8c8fb26133","Type":"ContainerStarted","Data":"bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9"}
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.103619 4984 generic.go:334] "Generic (PLEG): container finished" podID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerID="b43c0631539e8d8618d4ae2280e84e9cef0ad9ab61a9b8d7dfd994b58ac2994b" exitCode=0
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.103706 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f26c-account-create-update-7p7pm" event={"ID":"0dd7bd77-9e19-4ad1-9711-e0290f74afa8","Type":"ContainerDied","Data":"b43c0631539e8d8618d4ae2280e84e9cef0ad9ab61a9b8d7dfd994b58ac2994b"}
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.113266 4984 generic.go:334] "Generic (PLEG): container finished" podID="4c89dde7-c492-44dd-b36c-571540039b30" containerID="73182d3db897a608122b23320455311eced5f1e7bb5cd0d6aaf0f4d8d9abd5cb" exitCode=0
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.113443 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4q2ws" event={"ID":"4c89dde7-c492-44dd-b36c-571540039b30","Type":"ContainerDied","Data":"73182d3db897a608122b23320455311eced5f1e7bb5cd0d6aaf0f4d8d9abd5cb"}
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.198214 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.326215 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"]
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.326466 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-72z27" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" containerID="cri-o://c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" gracePeriod=10
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.328465 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.395967 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"]
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.397490 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.446361 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"]
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495841 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495884 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495932 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.495993 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.590564 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-72z27" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601021 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601075 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601140 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601161 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.601205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.602008 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.602593 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.603303 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.603780 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.688215 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"dnsmasq-dns-b8fbc5445-b9djm\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.795218 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jjssn"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.806681 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") pod \"e83eb734-fae0-40ac-85db-8f8c8fb26133\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") "
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.806764 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") pod \"e83eb734-fae0-40ac-85db-8f8c8fb26133\" (UID: \"e83eb734-fae0-40ac-85db-8f8c8fb26133\") "
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.807644 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e83eb734-fae0-40ac-85db-8f8c8fb26133" (UID: "e83eb734-fae0-40ac-85db-8f8c8fb26133"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.811945 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk" (OuterVolumeSpecName: "kube-api-access-wsklk") pod "e83eb734-fae0-40ac-85db-8f8c8fb26133" (UID: "e83eb734-fae0-40ac-85db-8f8c8fb26133"). InnerVolumeSpecName "kube-api-access-wsklk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.877424 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.881372 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908157 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") pod \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") "
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908278 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") pod \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\" (UID: \"dd118357-c4bf-43ef-a738-9fcd6b07aac4\") "
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908669 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83eb734-fae0-40ac-85db-8f8c8fb26133-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.908688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsklk\" (UniqueName: \"kubernetes.io/projected/e83eb734-fae0-40ac-85db-8f8c8fb26133-kube-api-access-wsklk\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.910912 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd118357-c4bf-43ef-a738-9fcd6b07aac4" (UID: "dd118357-c4bf-43ef-a738-9fcd6b07aac4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:29 crc kubenswrapper[4984]: I0130 10:29:29.937590 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p" (OuterVolumeSpecName: "kube-api-access-nwk8p") pod "dd118357-c4bf-43ef-a738-9fcd6b07aac4" (UID: "dd118357-c4bf-43ef-a738-9fcd6b07aac4"). InnerVolumeSpecName "kube-api-access-nwk8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.010220 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd118357-c4bf-43ef-a738-9fcd6b07aac4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.010727 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwk8p\" (UniqueName: \"kubernetes.io/projected/dd118357-c4bf-43ef-a738-9fcd6b07aac4-kube-api-access-nwk8p\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.058330 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119582 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119707 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119797 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119847 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.119900 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") pod \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\" (UID: \"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.131020 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t" (OuterVolumeSpecName: "kube-api-access-tw55t") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "kube-api-access-tw55t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.138697 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jjssn" event={"ID":"e83eb734-fae0-40ac-85db-8f8c8fb26133","Type":"ContainerDied","Data":"bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9"}
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.138742 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcac3d6f5a75e5e772d4b1f2be4daf36412d4e13b6c0c44aec56a2dd23d61bd9"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.138808 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jjssn"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.154931 4984 generic.go:334] "Generic (PLEG): container finished" podID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934" exitCode=0
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155022 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-72z27"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155062 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerDied","Data":"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"}
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155098 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-72z27" event={"ID":"8c8fd5e7-478c-498f-b9a6-5ad836cf08fa","Type":"ContainerDied","Data":"619df733c9475736e4b6657287ca3072282f878b21d11d287f81458a8846cc46"}
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.155122 4984 scope.go:117] "RemoveContainer" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.165619 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6746-account-create-update-clg9v"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.166805 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6746-account-create-update-clg9v" event={"ID":"dd118357-c4bf-43ef-a738-9fcd6b07aac4","Type":"ContainerDied","Data":"40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265"}
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.166845 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40400c7266ede8ae15f27869a3a1263b46af09d2680c8299bd6b876586943265"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.188474 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config" (OuterVolumeSpecName: "config") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.194718 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.201786 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222473 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222507 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222521 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw55t\" (UniqueName: \"kubernetes.io/projected/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-kube-api-access-tw55t\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.222536 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.229065 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" (UID: "8c8fd5e7-478c-498f-b9a6-5ad836cf08fa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.269672 4984 scope.go:117] "RemoveContainer" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.305889 4984 scope.go:117] "RemoveContainer" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.308883 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934\": container with ID starting with c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934 not found: ID does not exist" containerID="c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.308929 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934"} err="failed to get container status \"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934\": rpc error: code = NotFound desc = could not find container \"c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934\": container with ID starting with c440a5f7050308f78044357e6d27c4a835a9cf919f34bca8b83dc88295f11934 not found: ID does not exist"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.308963 4984 scope.go:117] "RemoveContainer" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.309346 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d\": container with ID starting with 524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d not found: ID does not exist" containerID="524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.309390 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d"} err="failed to get container status \"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d\": rpc error: code = NotFound desc = could not find container \"524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d\": container with ID starting with 524992d2cc77a59fafbbc97bf7dd6c10160da2d2081d96b5610d74d9e180e13d not found: ID does not exist"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.324647 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.370994 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"]
Jan 30 10:29:30 crc kubenswrapper[4984]: W0130 10:29:30.371628 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3333aa79_f6c6_4ae8_9b45_233127846dff.slice/crio-855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec WatchSource:0}: Error finding container 855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec: Status 404 returned error can't find the container with id 855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.598124 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"]
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.601358 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4q2ws"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.604044 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-72z27"]
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.629268 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") pod \"4c89dde7-c492-44dd-b36c-571540039b30\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.629305 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") pod \"4c89dde7-c492-44dd-b36c-571540039b30\" (UID: \"4c89dde7-c492-44dd-b36c-571540039b30\") "
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.631419 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.632071 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c89dde7-c492-44dd-b36c-571540039b30" (UID: "4c89dde7-c492-44dd-b36c-571540039b30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633630 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="init"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633655 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="init"
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633690 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerName="mariadb-database-create"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633697 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerName="mariadb-database-create"
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633712 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633718 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns"
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633729 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c89dde7-c492-44dd-b36c-571540039b30" containerName="mariadb-database-create"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633735 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c89dde7-c492-44dd-b36c-571540039b30" containerName="mariadb-database-create"
Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.633753 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerName="mariadb-account-create-update"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633760 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerName="mariadb-account-create-update"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633900 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" containerName="mariadb-account-create-update"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633914 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" containerName="mariadb-database-create"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633923 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c89dde7-c492-44dd-b36c-571540039b30" containerName="mariadb-database-create"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.633935 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" containerName="dnsmasq-dns"
Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.638402 4984 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641061 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641313 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dcrnm" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641733 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.641913 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.655405 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j" (OuterVolumeSpecName: "kube-api-access-s4s2j") pod "4c89dde7-c492-44dd-b36c-571540039b30" (UID: "4c89dde7-c492-44dd-b36c-571540039b30"). InnerVolumeSpecName "kube-api-access-s4s2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.660872 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.700495 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.730710 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") pod \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.730814 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") pod \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\" (UID: \"0dd7bd77-9e19-4ad1-9711-e0290f74afa8\") " Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.732609 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dd7bd77-9e19-4ad1-9711-e0290f74afa8" (UID: "0dd7bd77-9e19-4ad1-9711-e0290f74afa8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735167 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735290 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqw5c\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-kube-api-access-pqw5c\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735328 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-lock\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735398 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735422 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-cache\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735582 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735729 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735749 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c89dde7-c492-44dd-b36c-571540039b30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.735761 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4s2j\" (UniqueName: \"kubernetes.io/projected/4c89dde7-c492-44dd-b36c-571540039b30-kube-api-access-s4s2j\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.736867 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv" (OuterVolumeSpecName: "kube-api-access-w8kzv") pod "0dd7bd77-9e19-4ad1-9711-e0290f74afa8" (UID: "0dd7bd77-9e19-4ad1-9711-e0290f74afa8"). InnerVolumeSpecName "kube-api-access-w8kzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837406 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837455 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-cache\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837532 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837562 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837598 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqw5c\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-kube-api-access-pqw5c\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837618 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" 
(UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-lock\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.837658 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8kzv\" (UniqueName: \"kubernetes.io/projected/0dd7bd77-9e19-4ad1-9711-e0290f74afa8-kube-api-access-w8kzv\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.839159 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.839193 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.839204 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: E0130 10:29:30.839670 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:31.339238939 +0000 UTC m=+1075.905542763 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.840507 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-lock\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.840780 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-cache\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.845664 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.856940 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqw5c\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-kube-api-access-pqw5c\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:30 crc kubenswrapper[4984]: I0130 10:29:30.877711 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:31 
crc kubenswrapper[4984]: I0130 10:29:31.119108 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.119885 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerName="mariadb-account-create-update" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.119916 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerName="mariadb-account-create-update" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.120145 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" containerName="mariadb-account-create-update" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.120778 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.122875 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.125793 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.128038 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142188 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142374 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142455 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142496 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142538 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142563 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.142629 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.155051 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.155813 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-ztr94 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-bww49" podUID="b37def03-aa60-444f-b361-08f97aa07211" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.164591 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j9rvs"] Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.165607 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.176435 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bww49"] Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.185439 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f26c-account-create-update-7p7pm" event={"ID":"0dd7bd77-9e19-4ad1-9711-e0290f74afa8","Type":"ContainerDied","Data":"a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.185479 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b7909fcafc5f52770cce69ef044d8bf6822ec9f035b561a003b9f20ed25577" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.185535 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f26c-account-create-update-7p7pm" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.188502 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4q2ws" event={"ID":"4c89dde7-c492-44dd-b36c-571540039b30","Type":"ContainerDied","Data":"ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.188545 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2ec52e5ce288ec6db926236a59da590e4fe3e477e7f6d3b1a9d22cc7df2137" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.188630 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4q2ws" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.190419 4984 generic.go:334] "Generic (PLEG): container finished" podID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerID="f04515d06093bea0006457a33fcd2dff143369d8a73d4cfd520b13fb1b93624f" exitCode=0 Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.190487 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.191659 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerDied","Data":"f04515d06093bea0006457a33fcd2dff143369d8a73d4cfd520b13fb1b93624f"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.191693 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerStarted","Data":"855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec"} Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.192686 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j9rvs"] Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.207640 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243676 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243736 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243766 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243802 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243862 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"swift-ring-rebalance-bww49\" (UID: 
\"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243893 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243924 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.243951 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244031 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: 
\"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244074 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244191 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244223 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.244720 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49" Jan 30 
10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.245223 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.245613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.248122 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.248361 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.249766 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.266131 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"swift-ring-rebalance-bww49\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") " pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345580 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345720 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345766 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345804 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345962 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.345988 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") pod \"b37def03-aa60-444f-b361-08f97aa07211\" (UID: \"b37def03-aa60-444f-b361-08f97aa07211\") "
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346179 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346230 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346269 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346299 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346355 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346402 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346420 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346442 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.346840 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts" (OuterVolumeSpecName: "scripts") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.347067 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.347662 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.347746 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.348052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.348509 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.348529 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 10:29:31 crc kubenswrapper[4984]: E0130 10:29:31.348579 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:32.348562544 +0000 UTC m=+1076.914866458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.349182 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.350866 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.350872 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.351672 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94" (OuterVolumeSpecName: "kube-api-access-ztr94") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "kube-api-access-ztr94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.352446 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.353469 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.356442 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.360964 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b37def03-aa60-444f-b361-08f97aa07211" (UID: "b37def03-aa60-444f-b361-08f97aa07211"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.371127 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"swift-ring-rebalance-j9rvs\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") " pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.448656 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449068 4984 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449205 4984 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449414 4984 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b37def03-aa60-444f-b361-08f97aa07211-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449540 4984 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b37def03-aa60-444f-b361-08f97aa07211-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449669 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztr94\" (UniqueName: \"kubernetes.io/projected/b37def03-aa60-444f-b361-08f97aa07211-kube-api-access-ztr94\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.449792 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b37def03-aa60-444f-b361-08f97aa07211-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:31 crc kubenswrapper[4984]: I0130 10:29:31.491784 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.009780 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j9rvs"]
Jan 30 10:29:32 crc kubenswrapper[4984]: W0130 10:29:32.014462 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cfe4feb_b1bb_4904_9955_c5833ef34e9e.slice/crio-fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3 WatchSource:0}: Error finding container fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3: Status 404 returned error can't find the container with id fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.099739 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8fd5e7-478c-498f-b9a6-5ad836cf08fa" path="/var/lib/kubelet/pods/8c8fd5e7-478c-498f-b9a6-5ad836cf08fa/volumes"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.199128 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerStarted","Data":"ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0"}
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.199856 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.200225 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerStarted","Data":"fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3"}
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.200296 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bww49"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.228725 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podStartSLOduration=3.22868238 podStartE2EDuration="3.22868238s" podCreationTimestamp="2026-01-30 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:32.224044224 +0000 UTC m=+1076.790348058" watchObservedRunningTime="2026-01-30 10:29:32.22868238 +0000 UTC m=+1076.794986204"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.293658 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bww49"]
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.307859 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-bww49"]
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.328832 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mwcqt"]
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.329883 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.335106 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mwcqt"]
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.366033 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.366099 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.366156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0"
Jan 30 10:29:32 crc kubenswrapper[4984]: E0130 10:29:32.366278 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 10:29:32 crc kubenswrapper[4984]: E0130 10:29:32.366290 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 10:29:32 crc kubenswrapper[4984]: E0130 10:29:32.366325 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:34.366311749 +0000 UTC m=+1078.932615573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.397974 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"]
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.398915 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.400783 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.407051 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"]
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468316 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468411 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468584 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.468638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.469124 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.491915 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"glance-db-create-mwcqt\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") " pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.569978 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.570122 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.570994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.585344 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"glance-064a-account-create-update-8lxkv\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") " pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.645785 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:32 crc kubenswrapper[4984]: I0130 10:29:32.711288 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.202840 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mwcqt"]
Jan 30 10:29:33 crc kubenswrapper[4984]: W0130 10:29:33.212760 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83c0dd46_b897_468f_87a0_a335dd8fd6d5.slice/crio-d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3 WatchSource:0}: Error finding container d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3: Status 404 returned error can't find the container with id d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3
Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.289624 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"]
Jan 30 10:29:33 crc kubenswrapper[4984]: W0130 10:29:33.293602 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849571b4_26bb_4853_af9c_f717967dea41.slice/crio-c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f WatchSource:0}: Error finding container c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f: Status 404 returned error can't find the container with id c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f
Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.985623 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wr78c"]
Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.987229 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:33 crc kubenswrapper[4984]: I0130 10:29:33.991693 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:33.998335 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:33.998695 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:33.999116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wr78c"]
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.100194 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.100400 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.100907 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37def03-aa60-444f-b361-08f97aa07211" path="/var/lib/kubelet/pods/b37def03-aa60-444f-b361-08f97aa07211/volumes"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.101493 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.123610 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"root-account-create-update-wr78c\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.218894 4984 generic.go:334] "Generic (PLEG): container finished" podID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerID="4e36e53c2881a6f73654429fc80824078411a297a7acc1ff57eb163eb773e0f9" exitCode=0
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.218989 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mwcqt" event={"ID":"83c0dd46-b897-468f-87a0-a335dd8fd6d5","Type":"ContainerDied","Data":"4e36e53c2881a6f73654429fc80824078411a297a7acc1ff57eb163eb773e0f9"}
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.219306 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mwcqt" event={"ID":"83c0dd46-b897-468f-87a0-a335dd8fd6d5","Type":"ContainerStarted","Data":"d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3"}
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.220924 4984 generic.go:334] "Generic (PLEG): container finished" podID="849571b4-26bb-4853-af9c-f717967dea41" containerID="3be32fd131009048bc81a0d4461ef13892f209f53fa5bcf3e5c232baa45cfcc2" exitCode=0
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.221927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-064a-account-create-update-8lxkv" event={"ID":"849571b4-26bb-4853-af9c-f717967dea41","Type":"ContainerDied","Data":"3be32fd131009048bc81a0d4461ef13892f209f53fa5bcf3e5c232baa45cfcc2"}
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.221963 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-064a-account-create-update-8lxkv" event={"ID":"849571b4-26bb-4853-af9c-f717967dea41","Type":"ContainerStarted","Data":"c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f"}
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.324551 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wr78c"
Jan 30 10:29:34 crc kubenswrapper[4984]: I0130 10:29:34.404898 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0"
Jan 30 10:29:34 crc kubenswrapper[4984]: E0130 10:29:34.405148 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 10:29:34 crc kubenswrapper[4984]: E0130 10:29:34.405189 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 10:29:34 crc kubenswrapper[4984]: E0130 10:29:34.405308 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:38.405230891 +0000 UTC m=+1082.971534715 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.077932 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv"
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.085445 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mwcqt"
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.135873 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") pod \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") "
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.135928 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") pod \"849571b4-26bb-4853-af9c-f717967dea41\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") "
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.135950 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") pod \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\" (UID: \"83c0dd46-b897-468f-87a0-a335dd8fd6d5\") "
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.136005 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") pod \"849571b4-26bb-4853-af9c-f717967dea41\" (UID: \"849571b4-26bb-4853-af9c-f717967dea41\") "
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.138558 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83c0dd46-b897-468f-87a0-a335dd8fd6d5" (UID: "83c0dd46-b897-468f-87a0-a335dd8fd6d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.138617 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "849571b4-26bb-4853-af9c-f717967dea41" (UID: "849571b4-26bb-4853-af9c-f717967dea41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.143415 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn" (OuterVolumeSpecName: "kube-api-access-5whqn") pod "849571b4-26bb-4853-af9c-f717967dea41" (UID: "849571b4-26bb-4853-af9c-f717967dea41"). InnerVolumeSpecName "kube-api-access-5whqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.144653 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd" (OuterVolumeSpecName: "kube-api-access-j2bcd") pod "83c0dd46-b897-468f-87a0-a335dd8fd6d5" (UID: "83c0dd46-b897-468f-87a0-a335dd8fd6d5"). InnerVolumeSpecName "kube-api-access-j2bcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238181 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2bcd\" (UniqueName: \"kubernetes.io/projected/83c0dd46-b897-468f-87a0-a335dd8fd6d5-kube-api-access-j2bcd\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238522 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/849571b4-26bb-4853-af9c-f717967dea41-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238536 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83c0dd46-b897-468f-87a0-a335dd8fd6d5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.238548 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whqn\" (UniqueName: \"kubernetes.io/projected/849571b4-26bb-4853-af9c-f717967dea41-kube-api-access-5whqn\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.243597 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wr78c"]
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.244475 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mwcqt" event={"ID":"83c0dd46-b897-468f-87a0-a335dd8fd6d5","Type":"ContainerDied","Data":"d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3"}
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.244542 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2dbcad9b89667a1f93860284157aa191480c5692d340d74bb12a190c60114a3"
Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.244496 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mwcqt" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.246935 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-064a-account-create-update-8lxkv" event={"ID":"849571b4-26bb-4853-af9c-f717967dea41","Type":"ContainerDied","Data":"c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f"} Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.246980 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c041573e802cc014295a5bdbd03c48e774c4c82214455c6e7ce272883edab46f" Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.246981 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-064a-account-create-update-8lxkv" Jan 30 10:29:36 crc kubenswrapper[4984]: W0130 10:29:36.249629 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd4914ea_6a7b_47c6_abbd_b2a0a067361d.slice/crio-c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177 WatchSource:0}: Error finding container c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177: Status 404 returned error can't find the container with id c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177 Jan 30 10:29:36 crc kubenswrapper[4984]: I0130 10:29:36.256327 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.261624 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerStarted","Data":"77e36f10450b6786e128bf55e10097bc7a62dfbf1fdd1101184e36a5286381a6"} Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.267990 4984 generic.go:334] "Generic (PLEG): container finished" 
podID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerID="0789f4290dbcaeca5700757294aca052563ba0644765c2738bb82c817de460e2" exitCode=0 Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.268050 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wr78c" event={"ID":"dd4914ea-6a7b-47c6-abbd-b2a0a067361d","Type":"ContainerDied","Data":"0789f4290dbcaeca5700757294aca052563ba0644765c2738bb82c817de460e2"} Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.268082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wr78c" event={"ID":"dd4914ea-6a7b-47c6-abbd-b2a0a067361d","Type":"ContainerStarted","Data":"c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177"} Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.288722 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j9rvs" podStartSLOduration=2.468319862 podStartE2EDuration="6.288700648s" podCreationTimestamp="2026-01-30 10:29:31 +0000 UTC" firstStartedPulling="2026-01-30 10:29:32.017618996 +0000 UTC m=+1076.583922820" lastFinishedPulling="2026-01-30 10:29:35.837999782 +0000 UTC m=+1080.404303606" observedRunningTime="2026-01-30 10:29:37.286566891 +0000 UTC m=+1081.852870755" watchObservedRunningTime="2026-01-30 10:29:37.288700648 +0000 UTC m=+1081.855004472" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.560524 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:29:37 crc kubenswrapper[4984]: E0130 10:29:37.560905 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849571b4-26bb-4853-af9c-f717967dea41" containerName="mariadb-account-create-update" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.560922 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="849571b4-26bb-4853-af9c-f717967dea41" containerName="mariadb-account-create-update" Jan 30 10:29:37 crc 
kubenswrapper[4984]: E0130 10:29:37.560934 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerName="mariadb-database-create" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.560941 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerName="mariadb-database-create" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.561111 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="849571b4-26bb-4853-af9c-f717967dea41" containerName="mariadb-account-create-update" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.561134 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" containerName="mariadb-database-create" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.561680 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.564021 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.564197 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-94rmf" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.579593 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663437 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663506 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663532 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.663583 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764706 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.764848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.770386 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.770850 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.774370 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.799789 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod 
\"glance-db-sync-v95fj\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:37 crc kubenswrapper[4984]: I0130 10:29:37.879913 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.482495 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0" Jan 30 10:29:38 crc kubenswrapper[4984]: E0130 10:29:38.483191 4984 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 10:29:38 crc kubenswrapper[4984]: E0130 10:29:38.483209 4984 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 10:29:38 crc kubenswrapper[4984]: E0130 10:29:38.483282 4984 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift podName:33b286d6-b58f-4d49-ae49-e3acdc77b7f5 nodeName:}" failed. No retries permitted until 2026-01-30 10:29:46.483260901 +0000 UTC m=+1091.049564725 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift") pod "swift-storage-0" (UID: "33b286d6-b58f-4d49-ae49-e3acdc77b7f5") : configmap "swift-ring-files" not found Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.498947 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v95fj"] Jan 30 10:29:38 crc kubenswrapper[4984]: W0130 10:29:38.563236 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfce8525_20d3_4c57_9638_37a46571c375.slice/crio-263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d WatchSource:0}: Error finding container 263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d: Status 404 returned error can't find the container with id 263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.645041 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.687674 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") pod \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.687908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") pod \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\" (UID: \"dd4914ea-6a7b-47c6-abbd-b2a0a067361d\") " Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.688527 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd4914ea-6a7b-47c6-abbd-b2a0a067361d" (UID: "dd4914ea-6a7b-47c6-abbd-b2a0a067361d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.696989 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq" (OuterVolumeSpecName: "kube-api-access-zlwlq") pod "dd4914ea-6a7b-47c6-abbd-b2a0a067361d" (UID: "dd4914ea-6a7b-47c6-abbd-b2a0a067361d"). InnerVolumeSpecName "kube-api-access-zlwlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.789945 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlwlq\" (UniqueName: \"kubernetes.io/projected/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-kube-api-access-zlwlq\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:38 crc kubenswrapper[4984]: I0130 10:29:38.789986 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4914ea-6a7b-47c6-abbd-b2a0a067361d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.285950 4984 generic.go:334] "Generic (PLEG): container finished" podID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerID="627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73" exitCode=0 Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.286023 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerDied","Data":"627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.288184 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wr78c" event={"ID":"dd4914ea-6a7b-47c6-abbd-b2a0a067361d","Type":"ContainerDied","Data":"c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.288215 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60b025f988e4d7af00109a32c8c4c981e913e5e3a77413a11a231d2fc21a177" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.288279 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wr78c" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.290049 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerStarted","Data":"263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.291852 4984 generic.go:334] "Generic (PLEG): container finished" podID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48" exitCode=0 Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.291907 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerDied","Data":"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"} Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.842875 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.883472 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.947168 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:39 crc kubenswrapper[4984]: I0130 10:29:39.947484 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns" containerID="cri-o://2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4" gracePeriod=10 Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.308448 4984 generic.go:334] "Generic (PLEG): container finished" podID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" 
containerID="2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4" exitCode=0 Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.308507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerDied","Data":"2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4"} Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.310418 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerStarted","Data":"53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a"} Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.310589 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.323570 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerStarted","Data":"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"} Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.323882 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.336528 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.704604969 podStartE2EDuration="58.336507716s" podCreationTimestamp="2026-01-30 10:28:42 +0000 UTC" firstStartedPulling="2026-01-30 10:28:49.479167662 +0000 UTC m=+1034.045471486" lastFinishedPulling="2026-01-30 10:29:05.111070409 +0000 UTC m=+1049.677374233" observedRunningTime="2026-01-30 10:29:40.33441277 +0000 UTC m=+1084.900716624" watchObservedRunningTime="2026-01-30 10:29:40.336507716 +0000 UTC m=+1084.902811550" Jan 30 10:29:40 crc 
kubenswrapper[4984]: I0130 10:29:40.367767 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.49467955 podStartE2EDuration="58.36774983s" podCreationTimestamp="2026-01-30 10:28:42 +0000 UTC" firstStartedPulling="2026-01-30 10:28:44.220227688 +0000 UTC m=+1028.786531512" lastFinishedPulling="2026-01-30 10:29:05.093297958 +0000 UTC m=+1049.659601792" observedRunningTime="2026-01-30 10:29:40.357978406 +0000 UTC m=+1084.924282230" watchObservedRunningTime="2026-01-30 10:29:40.36774983 +0000 UTC m=+1084.934053664" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.421870 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.441684 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wr78c"] Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.483304 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624447 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624538 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624624 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.624696 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") pod \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\" (UID: \"364f1e33-f14a-4248-82d5-eca3ab3e36c3\") " Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.635799 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q" (OuterVolumeSpecName: "kube-api-access-sp28q") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "kube-api-access-sp28q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.666173 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config" (OuterVolumeSpecName: "config") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.674606 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.679794 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "364f1e33-f14a-4248-82d5-eca3ab3e36c3" (UID: "364f1e33-f14a-4248-82d5-eca3ab3e36c3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.726977 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp28q\" (UniqueName: \"kubernetes.io/projected/364f1e33-f14a-4248-82d5-eca3ab3e36c3-kube-api-access-sp28q\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.727003 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.727012 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:40 crc kubenswrapper[4984]: I0130 10:29:40.727020 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/364f1e33-f14a-4248-82d5-eca3ab3e36c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.343379 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.343073 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-pb99t" event={"ID":"364f1e33-f14a-4248-82d5-eca3ab3e36c3","Type":"ContainerDied","Data":"6e626d6011bcc4c48e3533628b8c38012325767042b01e071526d1293a5fd60c"} Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.355588 4984 scope.go:117] "RemoveContainer" containerID="2d85115a6beaecff371abf3242d6f0452ea2b046f67b6ad4830a4a93f69a4de4" Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.383341 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.391069 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-pb99t"] Jan 30 10:29:41 crc kubenswrapper[4984]: I0130 10:29:41.400497 4984 scope.go:117] "RemoveContainer" containerID="f6e1b34b25853d4897e5be9605b68bf7a00d7cdea46fe63f164fe4950f791a05" Jan 30 10:29:42 crc kubenswrapper[4984]: I0130 10:29:42.101094 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" path="/var/lib/kubelet/pods/364f1e33-f14a-4248-82d5-eca3ab3e36c3/volumes" Jan 30 10:29:42 crc kubenswrapper[4984]: I0130 10:29:42.102919 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" path="/var/lib/kubelet/pods/dd4914ea-6a7b-47c6-abbd-b2a0a067361d/volumes" Jan 30 10:29:43 crc kubenswrapper[4984]: I0130 10:29:43.363746 4984 generic.go:334] "Generic (PLEG): container finished" podID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerID="77e36f10450b6786e128bf55e10097bc7a62dfbf1fdd1101184e36a5286381a6" exitCode=0 Jan 30 10:29:43 crc kubenswrapper[4984]: I0130 10:29:43.363833 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" 
event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerDied","Data":"77e36f10450b6786e128bf55e10097bc7a62dfbf1fdd1101184e36a5286381a6"}
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003509 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9mg8c"]
Jan 30 10:29:44 crc kubenswrapper[4984]: E0130 10:29:44.003906 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerName="mariadb-account-create-update"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003929 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerName="mariadb-account-create-update"
Jan 30 10:29:44 crc kubenswrapper[4984]: E0130 10:29:44.003952 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003959 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns"
Jan 30 10:29:44 crc kubenswrapper[4984]: E0130 10:29:44.003976 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="init"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.003983 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="init"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.004185 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="364f1e33-f14a-4248-82d5-eca3ab3e36c3" containerName="dnsmasq-dns"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.004208 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4914ea-6a7b-47c6-abbd-b2a0a067361d" containerName="mariadb-account-create-update"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.005745 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.008563 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.030208 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9mg8c"]
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.085570 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.085666 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.187503 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.187565 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.188264 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.221059 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"root-account-create-update-9mg8c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") " pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:44 crc kubenswrapper[4984]: I0130 10:29:44.336624 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:46 crc kubenswrapper[4984]: I0130 10:29:46.523142 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0"
Jan 30 10:29:46 crc kubenswrapper[4984]: I0130 10:29:46.541233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b286d6-b58f-4d49-ae49-e3acdc77b7f5-etc-swift\") pod \"swift-storage-0\" (UID: \"33b286d6-b58f-4d49-ae49-e3acdc77b7f5\") " pod="openstack/swift-storage-0"
Jan 30 10:29:46 crc kubenswrapper[4984]: I0130 10:29:46.613116 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.575609 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-js4wt"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.579370 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-js4wt"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.585236 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-m4spx" podUID="63184ee8-263b-4506-8844-4ae4fd2a80c7" containerName="ovn-controller" probeResult="failure" output=<
Jan 30 10:29:47 crc kubenswrapper[4984]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 30 10:29:47 crc kubenswrapper[4984]: >
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.812139 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"]
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.813991 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.815849 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.828856 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"]
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.854942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.855013 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.855043 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.856154 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.856283 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.856348 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957557 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957605 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957689 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957733 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.957765 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958506 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958522 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958600 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.958807 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.961712 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:47 crc kubenswrapper[4984]: I0130 10:29:47.979184 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"ovn-controller-m4spx-config-8pbt4\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") " pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:48 crc kubenswrapper[4984]: I0130 10:29:48.131526 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.019831 4984 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d] : Timed out while waiting for systemd to remove kubepods-besteffort-podf42e13a3_aadb_4dc7_aabb_5a769e2b0e2d.slice"
Jan 30 10:29:50 crc kubenswrapper[4984]: E0130 10:29:50.019888 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf42e13a3-aadb-4dc7-aabb-5a769e2b0e2d] : Timed out while waiting for systemd to remove kubepods-besteffort-podf42e13a3_aadb_4dc7_aabb_5a769e2b0e2d.slice" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d"
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.354822 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.400122 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.400193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.400292 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402466 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402602 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402653 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.402697 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") pod \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\" (UID: \"7cfe4feb-b1bb-4904-9955-c5833ef34e9e\") "
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.404210 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.404892 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.406533 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf" (OuterVolumeSpecName: "kube-api-access-lrqkf") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "kube-api-access-lrqkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.415743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432573 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts" (OuterVolumeSpecName: "scripts") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432676 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-22gp8"
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432689 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j9rvs"
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432730 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j9rvs" event={"ID":"7cfe4feb-b1bb-4904-9955-c5833ef34e9e","Type":"ContainerDied","Data":"fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3"}
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.432758 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd64f8c4477c971550800ceeb5036cabc0f50c059d9d7b94de3d3ea1745c8ab3"
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.436382 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.446881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7cfe4feb-b1bb-4904-9955-c5833ef34e9e" (UID: "7cfe4feb-b1bb-4904-9955-c5833ef34e9e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505426 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrqkf\" (UniqueName: \"kubernetes.io/projected/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-kube-api-access-lrqkf\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505673 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505741 4984 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505796 4984 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505858 4984 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505921 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.506028 4984 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfe4feb-b1bb-4904-9955-c5833ef34e9e-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.505871 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.518737 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-22gp8"]
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.847163 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9mg8c"]
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.882348 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 30 10:29:50 crc kubenswrapper[4984]: I0130 10:29:50.891155 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"]
Jan 30 10:29:50 crc kubenswrapper[4984]: W0130 10:29:50.901735 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod758d234e_dcc5_4555_9403_6afac762f662.slice/crio-b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0 WatchSource:0}: Error finding container b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0: Status 404 returned error can't find the container with id b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0
Jan 30 10:29:50 crc kubenswrapper[4984]: W0130 10:29:50.904842 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b286d6_b58f_4d49_ae49_e3acdc77b7f5.slice/crio-6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59 WatchSource:0}: Error finding container 6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59: Status 404 returned error can't find the container with id 6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.446979 4984 generic.go:334] "Generic (PLEG): container finished" podID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerID="f92bcc7f529c6d27eac4218b5f51170e776604565fbe8022a9769f8c3f32b9e1" exitCode=0
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.447287 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9mg8c" event={"ID":"f8da6f39-d290-44c4-93cd-0b2fcc37e01c","Type":"ContainerDied","Data":"f92bcc7f529c6d27eac4218b5f51170e776604565fbe8022a9769f8c3f32b9e1"}
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.447484 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9mg8c" event={"ID":"f8da6f39-d290-44c4-93cd-0b2fcc37e01c","Type":"ContainerStarted","Data":"4ffe4213c6bb43b1ddfaa847d935f4f89ebdbc056f9c867b6ca149dde03bb34b"}
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.450894 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerStarted","Data":"b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90"}
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.452416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"6dfbc5fa7a23be4904d742b7ff012afd21bda4bf23046fff0874658209a6bb59"}
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.454109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx-config-8pbt4" event={"ID":"758d234e-dcc5-4555-9403-6afac762f662","Type":"ContainerStarted","Data":"4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c"}
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.454238 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx-config-8pbt4" event={"ID":"758d234e-dcc5-4555-9403-6afac762f662","Type":"ContainerStarted","Data":"b01c6d72b98eb84f2ae223dfc17866cf11304953dd264263427d25bc152586d0"}
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.490467 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-m4spx-config-8pbt4" podStartSLOduration=4.490445614 podStartE2EDuration="4.490445614s" podCreationTimestamp="2026-01-30 10:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:51.48290781 +0000 UTC m=+1096.049211654" watchObservedRunningTime="2026-01-30 10:29:51.490445614 +0000 UTC m=+1096.056749438"
Jan 30 10:29:51 crc kubenswrapper[4984]: I0130 10:29:51.498571 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-v95fj" podStartSLOduration=2.7357691109999998 podStartE2EDuration="14.498552613s" podCreationTimestamp="2026-01-30 10:29:37 +0000 UTC" firstStartedPulling="2026-01-30 10:29:38.566519431 +0000 UTC m=+1083.132823255" lastFinishedPulling="2026-01-30 10:29:50.329302923 +0000 UTC m=+1094.895606757" observedRunningTime="2026-01-30 10:29:51.496314102 +0000 UTC m=+1096.062617936" watchObservedRunningTime="2026-01-30 10:29:51.498552613 +0000 UTC m=+1096.064856427"
Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.098755 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d" path="/var/lib/kubelet/pods/f42e13a3-aadb-4dc7-aabb-5a769e2b0e2d/volumes"
Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.467319 4984 generic.go:334] "Generic (PLEG): container finished" podID="758d234e-dcc5-4555-9403-6afac762f662" containerID="4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c" exitCode=0
Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.467418 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-m4spx-config-8pbt4" event={"ID":"758d234e-dcc5-4555-9403-6afac762f662","Type":"ContainerDied","Data":"4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c"}
Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.603391 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-m4spx"
Jan 30 10:29:52 crc kubenswrapper[4984]: I0130 10:29:52.980537 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.057817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") pod \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.057868 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") pod \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\" (UID: \"f8da6f39-d290-44c4-93cd-0b2fcc37e01c\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.058708 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8da6f39-d290-44c4-93cd-0b2fcc37e01c" (UID: "f8da6f39-d290-44c4-93cd-0b2fcc37e01c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.061036 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp" (OuterVolumeSpecName: "kube-api-access-pccfp") pod "f8da6f39-d290-44c4-93cd-0b2fcc37e01c" (UID: "f8da6f39-d290-44c4-93cd-0b2fcc37e01c"). InnerVolumeSpecName "kube-api-access-pccfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.161286 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pccfp\" (UniqueName: \"kubernetes.io/projected/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-kube-api-access-pccfp\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.161329 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8da6f39-d290-44c4-93cd-0b2fcc37e01c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.478327 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9mg8c" event={"ID":"f8da6f39-d290-44c4-93cd-0b2fcc37e01c","Type":"ContainerDied","Data":"4ffe4213c6bb43b1ddfaa847d935f4f89ebdbc056f9c867b6ca149dde03bb34b"}
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.480026 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffe4213c6bb43b1ddfaa847d935f4f89ebdbc056f9c867b6ca149dde03bb34b"
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.478360 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9mg8c"
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.486862 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"740d26b17d9ecc7d033ef1e735065ca6f39e638b98e546db4c60c2bd674ddcce"}
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.487105 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"0947a163fb316789a29764a1ead398a92cdde63adb51fd659a482380791f84ff"}
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.701429 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.756623 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4"
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.871387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.871476 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872422 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts" (OuterVolumeSpecName: "scripts") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872529 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872640 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872671 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.872747 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") pod \"758d234e-dcc5-4555-9403-6afac762f662\" (UID: \"758d234e-dcc5-4555-9403-6afac762f662\") "
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873407 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873464 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873825 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873859 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.873880 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run" (OuterVolumeSpecName: "var-run") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.877586 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v" (OuterVolumeSpecName: "kube-api-access-c2m9v") pod "758d234e-dcc5-4555-9403-6afac762f662" (UID: "758d234e-dcc5-4555-9403-6afac762f662").
InnerVolumeSpecName "kube-api-access-c2m9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974616 4984 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/758d234e-dcc5-4555-9403-6afac762f662-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974672 4984 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974684 4984 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974695 4984 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/758d234e-dcc5-4555-9403-6afac762f662-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:53 crc kubenswrapper[4984]: I0130 10:29:53.974707 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2m9v\" (UniqueName: \"kubernetes.io/projected/758d234e-dcc5-4555-9403-6afac762f662-kube-api-access-c2m9v\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.020240 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.027453 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.039036 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-m4spx-config-8pbt4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 
10:29:54.045872 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:29:54 crc kubenswrapper[4984]: E0130 10:29:54.046221 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerName="swift-ring-rebalance" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046240 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerName="swift-ring-rebalance" Jan 30 10:29:54 crc kubenswrapper[4984]: E0130 10:29:54.046269 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="758d234e-dcc5-4555-9403-6afac762f662" containerName="ovn-config" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046276 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="758d234e-dcc5-4555-9403-6afac762f662" containerName="ovn-config" Jan 30 10:29:54 crc kubenswrapper[4984]: E0130 10:29:54.046291 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerName="mariadb-account-create-update" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046298 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerName="mariadb-account-create-update" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046462 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfe4feb-b1bb-4904-9955-c5833ef34e9e" containerName="swift-ring-rebalance" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046474 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="758d234e-dcc5-4555-9403-6afac762f662" containerName="ovn-config" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.046490 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" containerName="mariadb-account-create-update" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 
10:29:54.047118 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.058281 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.077213 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.077325 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.122809 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="758d234e-dcc5-4555-9403-6afac762f662" path="/var/lib/kubelet/pods/758d234e-dcc5-4555-9403-6afac762f662/volumes" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.141105 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.143464 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.173754 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.180058 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.180167 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.181077 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.181092 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.182761 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.194516 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.200146 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.218793 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"barbican-db-create-bhbll\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.256998 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.258341 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.260793 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.267599 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281484 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281555 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281635 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.281696 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod 
\"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.340203 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.341275 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.343040 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.343972 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.344148 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.344428 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.355859 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.364628 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383175 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383305 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383357 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383384 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod 
\"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383460 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.383992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.384491 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.400490 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod \"cinder-db-create-p7n6d\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.402087 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") 
pod \"barbican-622f-account-create-update-xxrl4\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.445670 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.446862 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.464164 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.479657 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485278 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485338 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485390 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " 
pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485510 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.485535 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.488817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.516899 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"39cba512d1d3d48fc987b4a47e0907fd75abdd7985bb95833c667d3883d6b6ba"} Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.516955 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"912cf51779605e2b4fce3df3f69aad596f18f154299af844d168631b86214f42"} Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.518121 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod \"cinder-8d9e-account-create-update-pv4gq\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.521650 4984 scope.go:117] "RemoveContainer" containerID="4ab9d6fed7ef2d8e83d431ecc5534b8ea6adabfca7f780aafcd9c78c76b8680c" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.521829 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-m4spx-config-8pbt4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.543141 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.544075 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.557724 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.557894 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.559896 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.579770 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586587 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586700 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586739 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586785 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.586815 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " 
pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.593321 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.594220 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.610937 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"keystone-db-sync-whl8p\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.660426 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688720 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688792 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688846 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.688896 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.689582 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod 
\"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.709089 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"neutron-db-create-qtwt7\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.790381 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.792581 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.792824 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.793709 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.812392 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"neutron-61ae-account-create-update-8l5nb\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.881602 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:54 crc kubenswrapper[4984]: I0130 10:29:54.942190 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.012374 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bhbll"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.067076 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.104648 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-p7n6d"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.272359 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.383320 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qtwt7"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.533961 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerStarted","Data":"498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.534013 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-61ae-account-create-update-8l5nb"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.534045 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerStarted","Data":"bc4a3190c6737b7176126e0691adaa417a5283eba84a5b73be121b2b6188d4db"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.539122 4984 generic.go:334] "Generic (PLEG): container finished" podID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerID="2c715bd7c478626b0d30f0dcbe5f0fa4d9ddd3cebe540358d60fefd03ffbea4f" exitCode=0 Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.539189 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d9e-account-create-update-pv4gq" event={"ID":"c4f293b1-64af-45c3-8ee1-b8df7efdde3e","Type":"ContainerDied","Data":"2c715bd7c478626b0d30f0dcbe5f0fa4d9ddd3cebe540358d60fefd03ffbea4f"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.539255 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d9e-account-create-update-pv4gq" event={"ID":"c4f293b1-64af-45c3-8ee1-b8df7efdde3e","Type":"ContainerStarted","Data":"80ba735ba6e7de077f77b6d39e0d54cb7804bfa2f3e054f4623ac57daf82ec89"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.543774 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.545084 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerStarted","Data":"65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.545121 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" 
event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerStarted","Data":"354c4d1205f62964b6c1a29a854be88a05fd0d3d7efd2997db8c40286434d404"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.546804 4984 generic.go:334] "Generic (PLEG): container finished" podID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerID="636c0d411532393965dbc0c85c0755158f7ef4a0555bad562fe1e96ce9c7b1be" exitCode=0 Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.546845 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bhbll" event={"ID":"341b21ee-dc5c-48f9-9810-85d1af9b9de9","Type":"ContainerDied","Data":"636c0d411532393965dbc0c85c0755158f7ef4a0555bad562fe1e96ce9c7b1be"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.546863 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bhbll" event={"ID":"341b21ee-dc5c-48f9-9810-85d1af9b9de9","Type":"ContainerStarted","Data":"7645b092ecc2ba054183fe5327481ec543ff04782028f5de9de88330b0160cf4"} Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.549507 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9mg8c"] Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.556755 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-622f-account-create-update-xxrl4" podStartSLOduration=1.5567343359999999 podStartE2EDuration="1.556734336s" podCreationTimestamp="2026-01-30 10:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:55.550537969 +0000 UTC m=+1100.116841793" watchObservedRunningTime="2026-01-30 10:29:55.556734336 +0000 UTC m=+1100.123038160" Jan 30 10:29:55 crc kubenswrapper[4984]: I0130 10:29:55.577454 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-p7n6d" podStartSLOduration=1.577434796 
podStartE2EDuration="1.577434796s" podCreationTimestamp="2026-01-30 10:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:29:55.566152121 +0000 UTC m=+1100.132455945" watchObservedRunningTime="2026-01-30 10:29:55.577434796 +0000 UTC m=+1100.143738620" Jan 30 10:29:55 crc kubenswrapper[4984]: W0130 10:29:55.713538 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26267a37_c8e7_45b3_af7f_8050a58cb697.slice/crio-1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca WatchSource:0}: Error finding container 1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca: Status 404 returned error can't find the container with id 1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca Jan 30 10:29:55 crc kubenswrapper[4984]: W0130 10:29:55.715728 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5490b62_8700_4c9c_b4f7_517c71f91c46.slice/crio-05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033 WatchSource:0}: Error finding container 05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033: Status 404 returned error can't find the container with id 05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033 Jan 30 10:29:55 crc kubenswrapper[4984]: W0130 10:29:55.717610 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c1d730_34f1_4912_a0e9_f19d10e9ec9b.slice/crio-8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5 WatchSource:0}: Error finding container 8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5: Status 404 returned error can't find the container with id 8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5 Jan 30 10:29:56 crc 
kubenswrapper[4984]: I0130 10:29:56.118373 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8da6f39-d290-44c4-93cd-0b2fcc37e01c" path="/var/lib/kubelet/pods/f8da6f39-d290-44c4-93cd-0b2fcc37e01c/volumes" Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568057 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"765fc8a037b5105c4ecd45c14e6820f91f9d4493414014d4375a3af65079e680"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568112 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"eb86366b30fc87a2dbae1fc21c1f057dc57f579f1f38e1b510294178a5410364"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568131 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"673d9fd9bff276cb6981367eba1c808558ba7a19bfd44840ba924414dda09b8b"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.568144 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"14d22a96a0ac507fec174fe47c4bae47ec6431d02fc94a795eaa928c481c1855"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.569724 4984 generic.go:334] "Generic (PLEG): container finished" podID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerID="e4a188d3d377fd9a910224b46c8bfca036c469e31163b866035741aa0bc79a21" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.570147 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qtwt7" 
event={"ID":"26267a37-c8e7-45b3-af7f-8050a58cb697","Type":"ContainerDied","Data":"e4a188d3d377fd9a910224b46c8bfca036c469e31163b866035741aa0bc79a21"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.570188 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qtwt7" event={"ID":"26267a37-c8e7-45b3-af7f-8050a58cb697","Type":"ContainerStarted","Data":"1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.571273 4984 generic.go:334] "Generic (PLEG): container finished" podID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerID="65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.571357 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerDied","Data":"65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.585583 4984 generic.go:334] "Generic (PLEG): container finished" podID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerID="f9f5f71df6bcff6e848630eab001a1a161d02735319888972af7604f9aa242ac" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.585647 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-61ae-account-create-update-8l5nb" event={"ID":"f5490b62-8700-4c9c-b4f7-517c71f91c46","Type":"ContainerDied","Data":"f9f5f71df6bcff6e848630eab001a1a161d02735319888972af7604f9aa242ac"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.585672 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-61ae-account-create-update-8l5nb" event={"ID":"f5490b62-8700-4c9c-b4f7-517c71f91c46","Type":"ContainerStarted","Data":"05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.595775 4984 
generic.go:334] "Generic (PLEG): container finished" podID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" containerID="498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5" exitCode=0 Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.595982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerDied","Data":"498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5"} Jan 30 10:29:56 crc kubenswrapper[4984]: I0130 10:29:56.609710 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerStarted","Data":"8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5"} Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.006050 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.126955 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.128862 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") pod \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.128904 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") pod \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\" (UID: \"341b21ee-dc5c-48f9-9810-85d1af9b9de9\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.133784 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "341b21ee-dc5c-48f9-9810-85d1af9b9de9" (UID: "341b21ee-dc5c-48f9-9810-85d1af9b9de9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.148639 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s" (OuterVolumeSpecName: "kube-api-access-w948s") pod "341b21ee-dc5c-48f9-9810-85d1af9b9de9" (UID: "341b21ee-dc5c-48f9-9810-85d1af9b9de9"). InnerVolumeSpecName "kube-api-access-w948s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.231673 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") pod \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.232476 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") pod \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\" (UID: \"c4f293b1-64af-45c3-8ee1-b8df7efdde3e\") " Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.232984 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4f293b1-64af-45c3-8ee1-b8df7efdde3e" (UID: "c4f293b1-64af-45c3-8ee1-b8df7efdde3e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.233317 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.233350 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/341b21ee-dc5c-48f9-9810-85d1af9b9de9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.233368 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w948s\" (UniqueName: \"kubernetes.io/projected/341b21ee-dc5c-48f9-9810-85d1af9b9de9-kube-api-access-w948s\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.234536 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5" (OuterVolumeSpecName: "kube-api-access-9b9m5") pod "c4f293b1-64af-45c3-8ee1-b8df7efdde3e" (UID: "c4f293b1-64af-45c3-8ee1-b8df7efdde3e"). InnerVolumeSpecName "kube-api-access-9b9m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.334688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b9m5\" (UniqueName: \"kubernetes.io/projected/c4f293b1-64af-45c3-8ee1-b8df7efdde3e-kube-api-access-9b9m5\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.622432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bhbll" event={"ID":"341b21ee-dc5c-48f9-9810-85d1af9b9de9","Type":"ContainerDied","Data":"7645b092ecc2ba054183fe5327481ec543ff04782028f5de9de88330b0160cf4"} Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.622474 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7645b092ecc2ba054183fe5327481ec543ff04782028f5de9de88330b0160cf4" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.622540 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bhbll" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.626801 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8d9e-account-create-update-pv4gq" Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.627388 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8d9e-account-create-update-pv4gq" event={"ID":"c4f293b1-64af-45c3-8ee1-b8df7efdde3e","Type":"ContainerDied","Data":"80ba735ba6e7de077f77b6d39e0d54cb7804bfa2f3e054f4623ac57daf82ec89"} Jan 30 10:29:57 crc kubenswrapper[4984]: I0130 10:29:57.627444 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ba735ba6e7de077f77b6d39e0d54cb7804bfa2f3e054f4623ac57daf82ec89" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.040037 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.145927 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") pod \"d291ef2c-2cdb-47be-b508-efd4c8282791\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.146024 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") pod \"d291ef2c-2cdb-47be-b508-efd4c8282791\" (UID: \"d291ef2c-2cdb-47be-b508-efd4c8282791\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.148139 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d291ef2c-2cdb-47be-b508-efd4c8282791" (UID: "d291ef2c-2cdb-47be-b508-efd4c8282791"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.157442 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d" (OuterVolumeSpecName: "kube-api-access-5nn4d") pod "d291ef2c-2cdb-47be-b508-efd4c8282791" (UID: "d291ef2c-2cdb-47be-b508-efd4c8282791"). InnerVolumeSpecName "kube-api-access-5nn4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.195701 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.203230 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.211169 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.248489 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d291ef2c-2cdb-47be-b508-efd4c8282791-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.248524 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nn4d\" (UniqueName: \"kubernetes.io/projected/d291ef2c-2cdb-47be-b508-efd4c8282791-kube-api-access-5nn4d\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.349932 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") pod \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350513 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") pod \"f5490b62-8700-4c9c-b4f7-517c71f91c46\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350555 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") pod \"f5490b62-8700-4c9c-b4f7-517c71f91c46\" (UID: \"f5490b62-8700-4c9c-b4f7-517c71f91c46\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350675 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") pod \"26267a37-c8e7-45b3-af7f-8050a58cb697\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350506 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c6c0cd3-99cd-454e-8ceb-000141c59c2b" (UID: "1c6c0cd3-99cd-454e-8ceb-000141c59c2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350762 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") pod \"26267a37-c8e7-45b3-af7f-8050a58cb697\" (UID: \"26267a37-c8e7-45b3-af7f-8050a58cb697\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.350850 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") pod \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\" (UID: \"1c6c0cd3-99cd-454e-8ceb-000141c59c2b\") " Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351102 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"26267a37-c8e7-45b3-af7f-8050a58cb697" (UID: "26267a37-c8e7-45b3-af7f-8050a58cb697"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351465 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351488 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26267a37-c8e7-45b3-af7f-8050a58cb697-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.351729 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5490b62-8700-4c9c-b4f7-517c71f91c46" (UID: "f5490b62-8700-4c9c-b4f7-517c71f91c46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.353810 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2" (OuterVolumeSpecName: "kube-api-access-g5sq2") pod "26267a37-c8e7-45b3-af7f-8050a58cb697" (UID: "26267a37-c8e7-45b3-af7f-8050a58cb697"). InnerVolumeSpecName "kube-api-access-g5sq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.354432 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b" (OuterVolumeSpecName: "kube-api-access-gjj4b") pod "f5490b62-8700-4c9c-b4f7-517c71f91c46" (UID: "f5490b62-8700-4c9c-b4f7-517c71f91c46"). 
InnerVolumeSpecName "kube-api-access-gjj4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.354467 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw" (OuterVolumeSpecName: "kube-api-access-7ckgw") pod "1c6c0cd3-99cd-454e-8ceb-000141c59c2b" (UID: "1c6c0cd3-99cd-454e-8ceb-000141c59c2b"). InnerVolumeSpecName "kube-api-access-7ckgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453521 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ckgw\" (UniqueName: \"kubernetes.io/projected/1c6c0cd3-99cd-454e-8ceb-000141c59c2b-kube-api-access-7ckgw\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453553 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjj4b\" (UniqueName: \"kubernetes.io/projected/f5490b62-8700-4c9c-b4f7-517c71f91c46-kube-api-access-gjj4b\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453579 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5490b62-8700-4c9c-b4f7-517c71f91c46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.453589 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5sq2\" (UniqueName: \"kubernetes.io/projected/26267a37-c8e7-45b3-af7f-8050a58cb697-kube-api-access-g5sq2\") on node \"crc\" DevicePath \"\"" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.637998 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-p7n6d" event={"ID":"d291ef2c-2cdb-47be-b508-efd4c8282791","Type":"ContainerDied","Data":"354c4d1205f62964b6c1a29a854be88a05fd0d3d7efd2997db8c40286434d404"} Jan 30 
10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.638041 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354c4d1205f62964b6c1a29a854be88a05fd0d3d7efd2997db8c40286434d404" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.638096 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-p7n6d" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.648174 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-61ae-account-create-update-8l5nb" event={"ID":"f5490b62-8700-4c9c-b4f7-517c71f91c46","Type":"ContainerDied","Data":"05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.648217 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e9b45d94e7fcccfb2e6e300cde6431706cda1fe65b370d23fe1c1d0403d033" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.648294 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-61ae-account-create-update-8l5nb" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.655168 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-622f-account-create-update-xxrl4" event={"ID":"1c6c0cd3-99cd-454e-8ceb-000141c59c2b","Type":"ContainerDied","Data":"bc4a3190c6737b7176126e0691adaa417a5283eba84a5b73be121b2b6188d4db"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.655208 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4a3190c6737b7176126e0691adaa417a5283eba84a5b73be121b2b6188d4db" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.655276 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-622f-account-create-update-xxrl4" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.663308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"48437608a96e028280723cc171cbf034f324bf38044abaf69122dd3bd5845435"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.663370 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"9750871e7ae5708670ee16c69b20ad76a8fbbd279ada870e92a75b6ca7bbac5f"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.665148 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qtwt7" event={"ID":"26267a37-c8e7-45b3-af7f-8050a58cb697","Type":"ContainerDied","Data":"1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca"} Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.665221 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d23bc38de59921a6f323734ac4e1506c8ba548b7a5e857c8e6644a619bf98ca" Jan 30 10:29:58 crc kubenswrapper[4984]: I0130 10:29:58.665300 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qtwt7" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033337 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033747 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033770 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033791 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033802 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033812 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033820 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033837 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033845 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033856 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033863 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: E0130 10:29:59.033884 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.033892 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034085 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034102 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034117 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034134 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034143 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" containerName="mariadb-database-create" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034151 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" 
containerName="mariadb-account-create-update" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.034892 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.037204 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.053984 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.167177 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.167265 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.268913 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.269005 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq442\" 
(UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.270016 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.308173 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"root-account-create-update-79vbd\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " pod="openstack/root-account-create-update-79vbd" Jan 30 10:29:59 crc kubenswrapper[4984]: I0130 10:29:59.361158 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-79vbd" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.155184 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.156356 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.160912 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.166686 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.190071 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.285812 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.285881 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.285926 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.387434 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.387504 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.387550 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.388548 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.397456 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.415206 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"collect-profiles-29496150-kfrjt\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.495528 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.687845 4984 generic.go:334] "Generic (PLEG): container finished" podID="bfce8525-20d3-4c57-9638-37a46571c375" containerID="b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90" exitCode=0 Jan 30 10:30:00 crc kubenswrapper[4984]: I0130 10:30:00.688131 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerDied","Data":"b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90"} Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.408477 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528007 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528181 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528263 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.528315 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") pod \"bfce8525-20d3-4c57-9638-37a46571c375\" (UID: \"bfce8525-20d3-4c57-9638-37a46571c375\") " Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.533808 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn" (OuterVolumeSpecName: "kube-api-access-ft8sn") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "kube-api-access-ft8sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.555805 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.555944 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.597750 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data" (OuterVolumeSpecName: "config-data") pod "bfce8525-20d3-4c57-9638-37a46571c375" (UID: "bfce8525-20d3-4c57-9638-37a46571c375"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630455 4984 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630481 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630490 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfce8525-20d3-4c57-9638-37a46571c375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.630523 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft8sn\" (UniqueName: \"kubernetes.io/projected/bfce8525-20d3-4c57-9638-37a46571c375-kube-api-access-ft8sn\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.713307 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"c9ff95e5d9c3f7e2c0351ae47b4717eac8980d45a30b5c0fbbc2e013b3c72671"} Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.714308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v95fj" event={"ID":"bfce8525-20d3-4c57-9638-37a46571c375","Type":"ContainerDied","Data":"263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d"} Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.714346 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263f2f735a7d4cca3e387c85ad5d4e3b577ff4bf06433a3f7cd146596fb1c19d" Jan 30 10:30:02 crc 
kubenswrapper[4984]: I0130 10:30:02.714368 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v95fj" Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.783538 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 10:30:02 crc kubenswrapper[4984]: I0130 10:30:02.797667 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:30:02 crc kubenswrapper[4984]: W0130 10:30:02.798883 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c13999b_7269_403d_8be6_78d42f65f26c.slice/crio-d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6 WatchSource:0}: Error finding container d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6: Status 404 returned error can't find the container with id d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6 Jan 30 10:30:02 crc kubenswrapper[4984]: W0130 10:30:02.802681 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e5b8e4_a5a3_4cca_bb11_a627d40f3dc1.slice/crio-286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05 WatchSource:0}: Error finding container 286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05: Status 404 returned error can't find the container with id 286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05 Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.001120 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:30:03 crc kubenswrapper[4984]: 
I0130 10:30:03.001475 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.083935 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:03 crc kubenswrapper[4984]: E0130 10:30:03.084297 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfce8525-20d3-4c57-9638-37a46571c375" containerName="glance-db-sync" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.084309 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfce8525-20d3-4c57-9638-37a46571c375" containerName="glance-db-sync" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.084479 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfce8525-20d3-4c57-9638-37a46571c375" containerName="glance-db-sync" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.085496 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.098833 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254427 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254517 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.254592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.355922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.355964 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356006 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356046 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356090 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxvf\" (UniqueName: 
\"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.356999 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.357028 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.358009 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.362029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.374826 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"dnsmasq-dns-74dc88fc-xvwrv\" (UID: 
\"18c601d8-9d81-458a-b7d4-e0a68704af03\") " pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.404896 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.733404 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerStarted","Data":"429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.745780 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"7ec6f6ab3452d9616862534ade3255f4ce8a1976c78b1253278295d1327bedf7"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.745834 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"d6a28f165cb828c824461c4855e664f5762b43f3523c7ff8582e2bd062832103"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.745846 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"2bc42dc2361aaae3eac11f19a64fd2db20593f996c5450204d788d00e4a9cbfc"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.750911 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerStarted","Data":"9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.750965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" 
event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerStarted","Data":"286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.752861 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-whl8p" podStartSLOduration=2.805698036 podStartE2EDuration="9.752844406s" podCreationTimestamp="2026-01-30 10:29:54 +0000 UTC" firstStartedPulling="2026-01-30 10:29:55.734450059 +0000 UTC m=+1100.300753893" lastFinishedPulling="2026-01-30 10:30:02.681596419 +0000 UTC m=+1107.247900263" observedRunningTime="2026-01-30 10:30:03.748107206 +0000 UTC m=+1108.314411030" watchObservedRunningTime="2026-01-30 10:30:03.752844406 +0000 UTC m=+1108.319148230" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.767375 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c13999b-7269-403d-8be6-78d42f65f26c" containerID="673987907c6890a3da91b3b133a9ad126ca5110425aedf8c5b019ce181470176" exitCode=0 Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.767425 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" event={"ID":"5c13999b-7269-403d-8be6-78d42f65f26c","Type":"ContainerDied","Data":"673987907c6890a3da91b3b133a9ad126ca5110425aedf8c5b019ce181470176"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.767451 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" event={"ID":"5c13999b-7269-403d-8be6-78d42f65f26c","Type":"ContainerStarted","Data":"d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6"} Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.769220 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-79vbd" podStartSLOduration=4.769201291 podStartE2EDuration="4.769201291s" podCreationTimestamp="2026-01-30 
10:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:03.765516991 +0000 UTC m=+1108.331820825" watchObservedRunningTime="2026-01-30 10:30:03.769201291 +0000 UTC m=+1108.335505115" Jan 30 10:30:03 crc kubenswrapper[4984]: I0130 10:30:03.901193 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.776543 4984 generic.go:334] "Generic (PLEG): container finished" podID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerID="9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423" exitCode=0 Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.776639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerDied","Data":"9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423"} Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.779735 4984 generic.go:334] "Generic (PLEG): container finished" podID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerID="a3f7cfb2a4a336f740db618b7e51bd18d2a14a7494b62a64dc35117351fff550" exitCode=0 Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.779811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerDied","Data":"a3f7cfb2a4a336f740db618b7e51bd18d2a14a7494b62a64dc35117351fff550"} Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.779839 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerStarted","Data":"6d30e54b31da488e5549e257dd0db6b5ad4bbb86668a8fa8669b611ed7db9b43"} Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.809236 4984 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b286d6-b58f-4d49-ae49-e3acdc77b7f5","Type":"ContainerStarted","Data":"55e7a1871bdaa3502d542ee6951a043704271b2d5d204c7e19e40d691c88e49e"} Jan 30 10:30:04 crc kubenswrapper[4984]: I0130 10:30:04.869142 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.726632932 podStartE2EDuration="35.86911968s" podCreationTimestamp="2026-01-30 10:29:29 +0000 UTC" firstStartedPulling="2026-01-30 10:29:50.916709248 +0000 UTC m=+1095.483013112" lastFinishedPulling="2026-01-30 10:29:58.059196036 +0000 UTC m=+1102.625499860" observedRunningTime="2026-01-30 10:30:04.863016523 +0000 UTC m=+1109.429320357" watchObservedRunningTime="2026-01-30 10:30:04.86911968 +0000 UTC m=+1109.435423504" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.139997 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.218508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.248084 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:30:05 crc kubenswrapper[4984]: E0130 10:30:05.248503 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" containerName="collect-profiles" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.248527 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" containerName="collect-profiles" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.248734 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" containerName="collect-profiles" Jan 30 10:30:05 crc 
kubenswrapper[4984]: I0130 10:30:05.249862 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.251787 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.259931 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.286776 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") pod \"5c13999b-7269-403d-8be6-78d42f65f26c\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.286921 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") pod \"5c13999b-7269-403d-8be6-78d42f65f26c\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.286979 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") pod \"5c13999b-7269-403d-8be6-78d42f65f26c\" (UID: \"5c13999b-7269-403d-8be6-78d42f65f26c\") " Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.287899 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c13999b-7269-403d-8be6-78d42f65f26c" (UID: "5c13999b-7269-403d-8be6-78d42f65f26c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.302953 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979" (OuterVolumeSpecName: "kube-api-access-9q979") pod "5c13999b-7269-403d-8be6-78d42f65f26c" (UID: "5c13999b-7269-403d-8be6-78d42f65f26c"). InnerVolumeSpecName "kube-api-access-9q979". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.311901 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c13999b-7269-403d-8be6-78d42f65f26c" (UID: "5c13999b-7269-403d-8be6-78d42f65f26c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.388926 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389307 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389353 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389397 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389443 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389496 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389588 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c13999b-7269-403d-8be6-78d42f65f26c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.389617 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c13999b-7269-403d-8be6-78d42f65f26c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:05 
crc kubenswrapper[4984]: I0130 10:30:05.389630 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q979\" (UniqueName: \"kubernetes.io/projected/5c13999b-7269-403d-8be6-78d42f65f26c-kube-api-access-9q979\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.491825 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.491942 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492083 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492172 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492325 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.492730 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493071 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.493524 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.511023 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"dnsmasq-dns-5f59b8f679-kb757\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") " pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.634063 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.842413 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.842713 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt" event={"ID":"5c13999b-7269-403d-8be6-78d42f65f26c","Type":"ContainerDied","Data":"d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6"} Jan 30 10:30:05 crc kubenswrapper[4984]: I0130 10:30:05.842794 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9904009c49bfe0503d76c234a2c4cf7dc4b63f787e9f52d980b1cac56c748f6" Jan 30 10:30:06 crc kubenswrapper[4984]: W0130 10:30:06.096790 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90d82977_c98d_495c_bb24_89cbe285c74e.slice/crio-95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c WatchSource:0}: Error finding container 95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c: Status 404 returned error can't find the container with id 95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.107342 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.137281 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-79vbd" Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.203114 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") pod \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.203608 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") pod \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\" (UID: \"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1\") " Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.204012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" (UID: "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.209380 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442" (OuterVolumeSpecName: "kube-api-access-wq442") pod "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" (UID: "07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1"). InnerVolumeSpecName "kube-api-access-wq442". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.305773 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq442\" (UniqueName: \"kubernetes.io/projected/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-kube-api-access-wq442\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.305816 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.851913 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerStarted","Data":"c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c"} Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.852969 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerStarted","Data":"95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c"} Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.854305 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-79vbd" event={"ID":"07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1","Type":"ContainerDied","Data":"286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05"} Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.854339 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="286ef0f30eacf31fcdd3b400d6177592b655bc3929e9c91e181f393d4e489e05" Jan 30 10:30:06 crc kubenswrapper[4984]: I0130 10:30:06.854387 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-79vbd" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.862849 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerStarted","Data":"d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4"} Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.863225 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.862957 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns" containerID="cri-o://c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c" gracePeriod=10 Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:07.887637 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" podStartSLOduration=4.887615919 podStartE2EDuration="4.887615919s" podCreationTimestamp="2026-01-30 10:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:07.879565319 +0000 UTC m=+1112.445869143" watchObservedRunningTime="2026-01-30 10:30:07.887615919 +0000 UTC m=+1112.453919743" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.879648 4984 generic.go:334] "Generic (PLEG): container finished" podID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerID="c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c" exitCode=0 Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.879834 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" 
event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerDied","Data":"c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c"} Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.881470 4984 generic.go:334] "Generic (PLEG): container finished" podID="90d82977-c98d-495c-bb24-89cbe285c74e" containerID="d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4" exitCode=0 Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:09.881496 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerDied","Data":"d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4"} Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.258867 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.373814 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.373917 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.373967 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " Jan 30 10:30:25 
crc kubenswrapper[4984]: I0130 10:30:10.374076 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.374092 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") pod \"18c601d8-9d81-458a-b7d4-e0a68704af03\" (UID: \"18c601d8-9d81-458a-b7d4-e0a68704af03\") " Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.380374 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf" (OuterVolumeSpecName: "kube-api-access-nvxvf") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "kube-api-access-nvxvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.420905 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.425480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.426653 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.432947 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config" (OuterVolumeSpecName: "config") pod "18c601d8-9d81-458a-b7d4-e0a68704af03" (UID: "18c601d8-9d81-458a-b7d4-e0a68704af03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475453 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475487 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475501 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.475514 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvxvf\" (UniqueName: \"kubernetes.io/projected/18c601d8-9d81-458a-b7d4-e0a68704af03-kube-api-access-nvxvf\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:25 crc 
kubenswrapper[4984]: I0130 10:30:10.475526 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18c601d8-9d81-458a-b7d4-e0a68704af03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.616668 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.623865 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-79vbd"] Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.897055 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerStarted","Data":"7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116"} Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.898423 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" event={"ID":"18c601d8-9d81-458a-b7d4-e0a68704af03","Type":"ContainerDied","Data":"6d30e54b31da488e5549e257dd0db6b5ad4bbb86668a8fa8669b611ed7db9b43"} Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.898451 4984 scope.go:117] "RemoveContainer" containerID="c940ec8371870be43e70fc92b66ca253c935b32ca13dd9677b0a5e4218a2448c" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.898556 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-xvwrv" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.977113 4984 scope.go:117] "RemoveContainer" containerID="a3f7cfb2a4a336f740db618b7e51bd18d2a14a7494b62a64dc35117351fff550" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.982602 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:10.989216 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-xvwrv"] Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:11.907783 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:11.938873 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podStartSLOduration=6.938849645 podStartE2EDuration="6.938849645s" podCreationTimestamp="2026-01-30 10:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:11.932352228 +0000 UTC m=+1116.498656072" watchObservedRunningTime="2026-01-30 10:30:11.938849645 +0000 UTC m=+1116.505153469" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:12.102230 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" path="/var/lib/kubelet/pods/07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1/volumes" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:12.103840 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" path="/var/lib/kubelet/pods/18c601d8-9d81-458a-b7d4-e0a68704af03/volumes" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.629412 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:30:25 
crc kubenswrapper[4984]: E0130 10:30:15.630053 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630065 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns" Jan 30 10:30:25 crc kubenswrapper[4984]: E0130 10:30:15.630080 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerName="mariadb-account-create-update" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630087 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerName="mariadb-account-create-update" Jan 30 10:30:25 crc kubenswrapper[4984]: E0130 10:30:15.630118 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="init" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630126 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="init" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630328 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c601d8-9d81-458a-b7d4-e0a68704af03" containerName="dnsmasq-dns" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630341 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e5b8e4-a5a3-4cca-bb11-a627d40f3dc1" containerName="mariadb-account-create-update" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.630849 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.633111 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.635644 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.646134 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.707065 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.707439 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" containerID="cri-o://ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0" gracePeriod=10 Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.757697 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.757844 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc 
kubenswrapper[4984]: I0130 10:30:15.859497 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.859606 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.860426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.883130 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"root-account-create-update-l7nmz\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:15.956860 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:17.974103 4984 generic.go:334] "Generic (PLEG): container finished" podID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerID="ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0" exitCode=0 Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:17.974195 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerDied","Data":"ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0"} Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:19.882587 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 30 10:30:25 crc kubenswrapper[4984]: I0130 10:30:24.883482 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.053069 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" event={"ID":"3333aa79-f6c6-4ae8-9b45-233127846dff","Type":"ContainerDied","Data":"855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec"} Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.053435 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="855d911eeddd5e7931226a1879c26af317a99a74d6227a42ce5489ea31a590ec" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.101049 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141653 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141732 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141792 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141854 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.141879 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") pod \"3333aa79-f6c6-4ae8-9b45-233127846dff\" (UID: \"3333aa79-f6c6-4ae8-9b45-233127846dff\") " Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.162638 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz" (OuterVolumeSpecName: "kube-api-access-57hbz") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "kube-api-access-57hbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.185482 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.193427 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.200843 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.214118 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config" (OuterVolumeSpecName: "config") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.217734 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3333aa79-f6c6-4ae8-9b45-233127846dff" (UID: "3333aa79-f6c6-4ae8-9b45-233127846dff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243484 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57hbz\" (UniqueName: \"kubernetes.io/projected/3333aa79-f6c6-4ae8-9b45-233127846dff-kube-api-access-57hbz\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243521 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243534 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243546 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:26 crc kubenswrapper[4984]: I0130 10:30:26.243559 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3333aa79-f6c6-4ae8-9b45-233127846dff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.059352 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b9djm" Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.059369 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l7nmz" event={"ID":"ca9a5e83-0bd4-4550-a3c9-e297cc831e99","Type":"ContainerStarted","Data":"38b8acdbf22de2e491ba98f9e89d9b406019ed29c7572f333da243974dfc7540"} Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.098543 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:30:27 crc kubenswrapper[4984]: I0130 10:30:27.106418 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b9djm"] Jan 30 10:30:28 crc kubenswrapper[4984]: I0130 10:30:28.072622 4984 generic.go:334] "Generic (PLEG): container finished" podID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerID="0e27973ea9b1e09e6fd759eac37e1b5558d22ece2091da32401b555f34855ccf" exitCode=0 Jan 30 10:30:28 crc kubenswrapper[4984]: I0130 10:30:28.072705 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l7nmz" event={"ID":"ca9a5e83-0bd4-4550-a3c9-e297cc831e99","Type":"ContainerDied","Data":"0e27973ea9b1e09e6fd759eac37e1b5558d22ece2091da32401b555f34855ccf"} Jan 30 10:30:28 crc kubenswrapper[4984]: I0130 10:30:28.103474 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" path="/var/lib/kubelet/pods/3333aa79-f6c6-4ae8-9b45-233127846dff/volumes" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.423541 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.497607 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") pod \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.497669 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") pod \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\" (UID: \"ca9a5e83-0bd4-4550-a3c9-e297cc831e99\") " Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.498468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca9a5e83-0bd4-4550-a3c9-e297cc831e99" (UID: "ca9a5e83-0bd4-4550-a3c9-e297cc831e99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.503680 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd" (OuterVolumeSpecName: "kube-api-access-rfbhd") pod "ca9a5e83-0bd4-4550-a3c9-e297cc831e99" (UID: "ca9a5e83-0bd4-4550-a3c9-e297cc831e99"). InnerVolumeSpecName "kube-api-access-rfbhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.600575 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:29 crc kubenswrapper[4984]: I0130 10:30:29.601733 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfbhd\" (UniqueName: \"kubernetes.io/projected/ca9a5e83-0bd4-4550-a3c9-e297cc831e99-kube-api-access-rfbhd\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:30 crc kubenswrapper[4984]: I0130 10:30:30.092187 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l7nmz" Jan 30 10:30:30 crc kubenswrapper[4984]: I0130 10:30:30.103678 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l7nmz" event={"ID":"ca9a5e83-0bd4-4550-a3c9-e297cc831e99","Type":"ContainerDied","Data":"38b8acdbf22de2e491ba98f9e89d9b406019ed29c7572f333da243974dfc7540"} Jan 30 10:30:30 crc kubenswrapper[4984]: I0130 10:30:30.103737 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b8acdbf22de2e491ba98f9e89d9b406019ed29c7572f333da243974dfc7540" Jan 30 10:30:33 crc kubenswrapper[4984]: I0130 10:30:33.000868 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:30:33 crc kubenswrapper[4984]: I0130 10:30:33.001405 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:30:34 crc kubenswrapper[4984]: I0130 10:30:34.153886 4984 generic.go:334] "Generic (PLEG): container finished" podID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" containerID="429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534" exitCode=0 Jan 30 10:30:34 crc kubenswrapper[4984]: I0130 10:30:34.153959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerDied","Data":"429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534"} Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.501385 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.606036 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") pod \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.606073 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") pod \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.606280 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") pod \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\" (UID: \"58c1d730-34f1-4912-a0e9-f19d10e9ec9b\") " Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.611486 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg" (OuterVolumeSpecName: "kube-api-access-nsfvg") pod "58c1d730-34f1-4912-a0e9-f19d10e9ec9b" (UID: "58c1d730-34f1-4912-a0e9-f19d10e9ec9b"). InnerVolumeSpecName "kube-api-access-nsfvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.627774 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58c1d730-34f1-4912-a0e9-f19d10e9ec9b" (UID: "58c1d730-34f1-4912-a0e9-f19d10e9ec9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.648886 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data" (OuterVolumeSpecName: "config-data") pod "58c1d730-34f1-4912-a0e9-f19d10e9ec9b" (UID: "58c1d730-34f1-4912-a0e9-f19d10e9ec9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.708364 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.708405 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsfvg\" (UniqueName: \"kubernetes.io/projected/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-kube-api-access-nsfvg\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:35 crc kubenswrapper[4984]: I0130 10:30:35.708415 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58c1d730-34f1-4912-a0e9-f19d10e9ec9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.177727 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-whl8p" event={"ID":"58c1d730-34f1-4912-a0e9-f19d10e9ec9b","Type":"ContainerDied","Data":"8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5"} Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.177793 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e36d025623f6e33bc8b1652b26598d1e93ca2c6c9ba92c8616289c84bc2d1f5" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.177849 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-whl8p" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.435942 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436501 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerName="mariadb-account-create-update" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436517 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerName="mariadb-account-create-update" Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436531 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436538 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436567 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" containerName="keystone-db-sync" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436574 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" containerName="keystone-db-sync" Jan 30 10:30:36 crc kubenswrapper[4984]: E0130 10:30:36.436586 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="init" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436591 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="init" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436729 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" 
containerName="keystone-db-sync" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436749 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" containerName="mariadb-account-create-update" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.436759 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3333aa79-f6c6-4ae8-9b45-233127846dff" containerName="dnsmasq-dns" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.437304 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.439194 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.440585 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.440883 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.440956 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.441804 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.450090 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.451904 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.472691 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.489575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.526383 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528260 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528753 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 
10:30:36.528872 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.528985 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538635 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538741 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538776 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 
10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538834 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.538858 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.603428 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.604808 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.620978 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.621261 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-527mc" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.621489 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.622166 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.628824 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640559 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640624 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640643 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 
10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640671 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640699 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640739 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640770 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640799 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640818 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.640872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.641878 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.641994 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.642488 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.644600 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.652817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.656651 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.673010 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"keystone-bootstrap-62mq2\" (UID: 
\"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.676426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.676924 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.681102 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.681143 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"keystone-bootstrap-62mq2\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") " pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.708036 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"dnsmasq-dns-bbf5cc879-r4m7s\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") " pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc 
kubenswrapper[4984]: I0130 10:30:36.716313 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.717451 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.730317 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-t4jkv" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.731699 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.731988 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742621 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742656 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" 
Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742740 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.742761 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.754784 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.765478 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.768634 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.836231 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.837362 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.840778 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7t44v" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.840952 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.841056 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846148 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846221 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846309 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod 
\"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846396 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846417 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846453 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846485 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846511 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") 
" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846572 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.846626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.850143 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.850533 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.851348 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.854537 4984 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.856412 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.860785 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.861013 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.864056 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.870956 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.887695 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"horizon-7c95864f45-hf2gl\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.902559 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.903874 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.934912 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.955553 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956805 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956861 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956909 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956952 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.956982 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966209 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966288 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966313 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966332 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966377 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966413 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966566 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.966438 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ws2d\" (UniqueName: 
\"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970592 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970757 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.970784 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.979968 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.984517 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.985813 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:36 crc kubenswrapper[4984]: I0130 10:30:36.987060 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod 
\"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:36.999843 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.008808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"cinder-db-sync-4q4x7\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") " pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.036504 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.040239 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.054162 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.054358 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.054464 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gnpsj" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072178 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072224 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072273 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072296 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " 
pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072312 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072329 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072359 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072377 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072395 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072410 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072428 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072444 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072481 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072504 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.072535 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod 
\"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.073698 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.074007 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.074069 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.077638 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.088062 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.099836 4984 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.100299 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.102238 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.103289 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.105092 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.109283 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4q4x7" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.115181 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.115666 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"neutron-db-sync-5hx59\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") " pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.119085 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.120125 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.133342 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"ceilometer-0\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") " pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.134205 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"horizon-6c5bb97f77-vgk6b\" (UID: 
\"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176292 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176349 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176442 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176498 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.176594 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"placement-db-sync-bfzdw\" (UID: 
\"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.193073 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.214327 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.215692 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.230859 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.231433 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5hx59" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.238471 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.238639 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dbvq5" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.246549 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.258800 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.262266 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.276484 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278089 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278912 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278952 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.278981 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279013 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod 
\"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279065 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279122 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.279976 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.280109 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.280340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " 
pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.283478 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.283652 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-94rmf" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.283915 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.287450 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.287755 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.287886 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.290517 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.293471 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.301378 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"placement-db-sync-bfzdw\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") " pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.314560 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.343728 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382100 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382150 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382176 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382191 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: 
\"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382514 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382546 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382587 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod \"barbican-db-sync-pxnz6\" (UID: 
\"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382657 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382685 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.382713 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383216 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383267 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383285 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.383306 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.387411 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.388406 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.405350 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bfzdw" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.407234 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod \"barbican-db-sync-pxnz6\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485397 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485436 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485472 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485489 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc 
kubenswrapper[4984]: I0130 10:30:37.485525 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485543 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485560 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485641 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485657 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485681 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485699 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485715 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.485736 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.487493 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.488398 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.488662 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.489690 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.490197 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.490688 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.490936 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.491658 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.492552 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.494673 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.499528 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.505630 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.506617 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"dnsmasq-dns-56df8fb6b7-qkbrd\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.507591 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.559939 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxnz6"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.560862 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.580606 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.597622 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62mq2"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.611512 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.627035 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.709702 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.711398 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.715267 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.715301 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.731597 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.739648 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793617 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793880 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793925 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793943 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.793980 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.794016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.794041 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.794059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.814229 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5hx59"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.823516 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4q4x7"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895673 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895723 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895740 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895777 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895837 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.895854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.896406 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.898245 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.899573 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.901754 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.908577 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.909411 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.921081 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.922607 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.923528 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:30:37 crc kubenswrapper[4984]: I0130 10:30:37.950127 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.048502 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bfzdw"]
Jan 30 10:30:38 crc kubenswrapper[4984]: W0130 10:30:38.056105 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4548afd2_be23_4ea7_a5a4_14b8fad4f5fa.slice/crio-fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4 WatchSource:0}: Error finding container fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4: Status 404 returned error can't find the container with id fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.057682 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.066917 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.216939 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c95864f45-hf2gl" event={"ID":"ea17d23b-4f8b-425c-bc10-f6bd35f661bf","Type":"ContainerStarted","Data":"a0f8ade3bd22a088d1a93fb53e8dade691d3e07b79328b0d3ec8e2f6f2ae944b"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.218415 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"33596f8966073af30764199a2a914ff3e9f8caa7ca44b53b652d4e885a08aa2f"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.219508 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerStarted","Data":"bac16c50dbc54a56989a819b2fb558872bce9cd29de279c380d506e6e46a94f0"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.220380 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerStarted","Data":"8c1d0f7dc02303cf5bb0d029a247772d55790cec54bba645727d9dadb4e4bde2"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.222398 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" event={"ID":"1179d293-414e-4b1d-8020-37147612b45f","Type":"ContainerStarted","Data":"b218f58fbde2b318e5bfbbfe5598d78b25851ef8311979e7ba06842db2e42d84"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.225692 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerStarted","Data":"b68d87d9bb96723ca8797d2627af124dde7df989f62624faf45fa6f9a02a018d"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.227204 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5bb97f77-vgk6b" event={"ID":"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa","Type":"ContainerStarted","Data":"fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.228242 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerStarted","Data":"b22cfaa6ea4686fc0571245806e8e06ec7680b75dec3155d20471ab3af1337c6"}
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.255815 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pxnz6"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.290232 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 10:30:38 crc kubenswrapper[4984]: W0130 10:30:38.328684 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17f579b7_9f28_42f6_a7be_b7c562962f19.slice/crio-46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1 WatchSource:0}: Error finding container 46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1: Status 404 returned error can't find the container with id 46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.329796 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.461206 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.516578 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.549788 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.557440 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.604906 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.624913 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625020 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625054 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625098 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.625125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.627392 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.660333 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731294 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731353 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731424 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.731469 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.733657 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.734045 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.734239 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.735853 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.782986 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.798828 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"horizon-59c58ffc9c-jj2qg\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") " pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: I0130 10:30:38.868679 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:30:38 crc kubenswrapper[4984]: W0130 10:30:38.871219 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb045c802_b737_4590_82c8_e8a3a54247dc.slice/crio-a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a WatchSource:0}: Error finding container a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a: Status 404 returned error can't find the container with id a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.243433 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerStarted","Data":"82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.248478 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerStarted","Data":"a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.255009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerStarted","Data":"886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.257234 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerStarted","Data":"c150541fb16c40a06f1f4b6b64bfff01ebc0687acbccdc05a1f6c7f17f0d9920"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.267041 4984 generic.go:334] "Generic (PLEG): container finished" podID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" exitCode=0
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.267373 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerDied","Data":"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.267414 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerStarted","Data":"46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.273361 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerStarted","Data":"b1fa41772b71982e34712806d8e4239d302c810441b3b76f99d64938a74e6924"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.281159 4984 generic.go:334] "Generic (PLEG): container finished" podID="1179d293-414e-4b1d-8020-37147612b45f" containerID="622c37ecc830de634c356c96b7be30fd48b8c32ceb786ebc7543d1e4ebb9245c" exitCode=0
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.281223 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" event={"ID":"1179d293-414e-4b1d-8020-37147612b45f","Type":"ContainerDied","Data":"622c37ecc830de634c356c96b7be30fd48b8c32ceb786ebc7543d1e4ebb9245c"}
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.286782 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5hx59" podStartSLOduration=3.286765196 podStartE2EDuration="3.286765196s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:39.281705969 +0000 UTC m=+1143.848009793" watchObservedRunningTime="2026-01-30 10:30:39.286765196 +0000 UTC m=+1143.853069020"
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.289773 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-62mq2" podStartSLOduration=3.289764468 podStartE2EDuration="3.289764468s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:39.265401954 +0000 UTC m=+1143.831705778" watchObservedRunningTime="2026-01-30 10:30:39.289764468 +0000 UTC m=+1143.856068282"
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.414840 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"]
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.608239 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s"
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.680112 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") "
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.680283 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") "
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.680762 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") "
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.681404 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") "
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.681437 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") "
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.681476 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") pod \"1179d293-414e-4b1d-8020-37147612b45f\" (UID: \"1179d293-414e-4b1d-8020-37147612b45f\") "
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.689402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn" (OuterVolumeSpecName: "kube-api-access-2m4vn") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "kube-api-access-2m4vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.735402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.735913 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.736769 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.737448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config" (OuterVolumeSpecName: "config") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.735571 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1179d293-414e-4b1d-8020-37147612b45f" (UID: "1179d293-414e-4b1d-8020-37147612b45f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791558 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791600 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791614 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791625 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:30:39 crc kubenswrapper[4984]: I0130
10:30:39.791634 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1179d293-414e-4b1d-8020-37147612b45f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:39 crc kubenswrapper[4984]: I0130 10:30:39.791644 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m4vn\" (UniqueName: \"kubernetes.io/projected/1179d293-414e-4b1d-8020-37147612b45f-kube-api-access-2m4vn\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.297280 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerStarted","Data":"ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.304472 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerStarted","Data":"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.304549 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.311121 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerStarted","Data":"cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.316676 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.316822 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-r4m7s" event={"ID":"1179d293-414e-4b1d-8020-37147612b45f","Type":"ContainerDied","Data":"b218f58fbde2b318e5bfbbfe5598d78b25851ef8311979e7ba06842db2e42d84"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.316885 4984 scope.go:117] "RemoveContainer" containerID="622c37ecc830de634c356c96b7be30fd48b8c32ceb786ebc7543d1e4ebb9245c" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.319553 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59c58ffc9c-jj2qg" event={"ID":"c1e19dda-69fc-437b-b42c-727c3cff3813","Type":"ContainerStarted","Data":"9a4e15ad5e7e83774d55c10e4f5efd7b7cb63c50cf1ac1e695342968f871f85d"} Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.358074 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" podStartSLOduration=4.358050865 podStartE2EDuration="4.358050865s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:40.327872442 +0000 UTC m=+1144.894176266" watchObservedRunningTime="2026-01-30 10:30:40.358050865 +0000 UTC m=+1144.924354689" Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.374878 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:40 crc kubenswrapper[4984]: I0130 10:30:40.383955 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-r4m7s"] Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.332058 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerStarted","Data":"ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588"} Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.334749 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerStarted","Data":"6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d"} Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.334964 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" containerID="cri-o://cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c" gracePeriod=30 Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.335051 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" containerID="cri-o://6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d" gracePeriod=30 Jan 30 10:30:41 crc kubenswrapper[4984]: I0130 10:30:41.357623 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.357605868 podStartE2EDuration="4.357605868s" podCreationTimestamp="2026-01-30 10:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:41.356319312 +0000 UTC m=+1145.922623136" watchObservedRunningTime="2026-01-30 10:30:41.357605868 +0000 UTC m=+1145.923909692" Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.108714 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1179d293-414e-4b1d-8020-37147612b45f" 
path="/var/lib/kubelet/pods/1179d293-414e-4b1d-8020-37147612b45f/volumes" Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.347915 4984 generic.go:334] "Generic (PLEG): container finished" podID="cfc24334-4217-4656-9b38-281626334606" containerID="cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c" exitCode=143 Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.348013 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerDied","Data":"cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c"} Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.348090 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" containerID="cri-o://ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719" gracePeriod=30 Jan 30 10:30:42 crc kubenswrapper[4984]: I0130 10:30:42.348177 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-httpd" containerID="cri-o://ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588" gracePeriod=30 Jan 30 10:30:42 crc kubenswrapper[4984]: E0130 10:30:42.624364 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb045c802_b737_4590_82c8_e8a3a54247dc.slice/crio-ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588.scope\": RecentStats: unable to find data in memory cache]" Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.360209 4984 generic.go:334] "Generic (PLEG): container finished" podID="cfc24334-4217-4656-9b38-281626334606" 
containerID="6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d" exitCode=0 Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.360289 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerDied","Data":"6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d"} Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362922 4984 generic.go:334] "Generic (PLEG): container finished" podID="b045c802-b737-4590-82c8-e8a3a54247dc" containerID="ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588" exitCode=0 Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362950 4984 generic.go:334] "Generic (PLEG): container finished" podID="b045c802-b737-4590-82c8-e8a3a54247dc" containerID="ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719" exitCode=143 Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerDied","Data":"ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588"} Jan 30 10:30:43 crc kubenswrapper[4984]: I0130 10:30:43.362994 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerDied","Data":"ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719"} Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.085893 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.085871631 podStartE2EDuration="9.085871631s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:30:42.378686577 
+0000 UTC m=+1146.944990411" watchObservedRunningTime="2026-01-30 10:30:45.085871631 +0000 UTC m=+1149.652175455" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.095115 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.148192 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:30:45 crc kubenswrapper[4984]: E0130 10:30:45.148897 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1179d293-414e-4b1d-8020-37147612b45f" containerName="init" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.148924 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1179d293-414e-4b1d-8020-37147612b45f" containerName="init" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.149237 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1179d293-414e-4b1d-8020-37147612b45f" containerName="init" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.150630 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.154455 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.192116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198670 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198709 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198741 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.198769 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc 
kubenswrapper[4984]: I0130 10:30:45.199270 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.199354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.199565 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.219871 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.233067 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cb76cb6cb-wtx8d"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.237848 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.249066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb76cb6cb-wtx8d"] Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301017 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301071 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-combined-ca-bundle\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301093 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301110 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301127 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-secret-key\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301160 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301176 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-scripts\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301386 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-config-data\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301523 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpl8\" (UniqueName: 
\"kubernetes.io/projected/d1c7d24e-f131-485d-aaec-80a94d7ddd96-kube-api-access-sgpl8\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301586 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c7d24e-f131-485d-aaec-80a94d7ddd96-logs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301622 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301642 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301676 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-tls-certs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.301707 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod 
\"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.302198 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.302424 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.306877 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.307387 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.318337 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 
10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.332801 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"horizon-6b65cc758d-9hz7t\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403436 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-config-data\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403539 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgpl8\" (UniqueName: \"kubernetes.io/projected/d1c7d24e-f131-485d-aaec-80a94d7ddd96-kube-api-access-sgpl8\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c7d24e-f131-485d-aaec-80a94d7ddd96-logs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403713 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-tls-certs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403767 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-combined-ca-bundle\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403793 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-secret-key\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.403830 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-scripts\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.404042 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1c7d24e-f131-485d-aaec-80a94d7ddd96-logs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.404844 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-scripts\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.405728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1c7d24e-f131-485d-aaec-80a94d7ddd96-config-data\") 
pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.407760 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-combined-ca-bundle\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.407768 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-tls-certs\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.408521 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d1c7d24e-f131-485d-aaec-80a94d7ddd96-horizon-secret-key\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.423402 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgpl8\" (UniqueName: \"kubernetes.io/projected/d1c7d24e-f131-485d-aaec-80a94d7ddd96-kube-api-access-sgpl8\") pod \"horizon-6cb76cb6cb-wtx8d\" (UID: \"d1c7d24e-f131-485d-aaec-80a94d7ddd96\") " pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.483170 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:30:45 crc kubenswrapper[4984]: I0130 10:30:45.558323 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:30:47 crc kubenswrapper[4984]: I0130 10:30:47.614522 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:30:47 crc kubenswrapper[4984]: I0130 10:30:47.698742 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"] Jan 30 10:30:47 crc kubenswrapper[4984]: I0130 10:30:47.699052 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" containerID="cri-o://7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116" gracePeriod=10 Jan 30 10:30:49 crc kubenswrapper[4984]: I0130 10:30:49.420440 4984 generic.go:334] "Generic (PLEG): container finished" podID="90d82977-c98d-495c-bb24-89cbe285c74e" containerID="7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116" exitCode=0 Jan 30 10:30:49 crc kubenswrapper[4984]: I0130 10:30:49.420503 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerDied","Data":"7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116"} Jan 30 10:30:50 crc kubenswrapper[4984]: I0130 10:30:50.635588 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.705590 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.706114 4984 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d6h548h5b7h8bh5bch5c9h66fhc5hf6h59h589h9bh87h5cdh586h5f7h8bh65fh64h688h68hcch9h649h96h9fh88h664hc9h559h654hb8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4hnsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-7c95864f45-hf2gl_openstack(ea17d23b-4f8b-425c-bc10-f6bd35f661bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.710877 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7c95864f45-hf2gl" podUID="ea17d23b-4f8b-425c-bc10-f6bd35f661bf" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.862451 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 10:30:53.862673 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547h95h5c7h64h54dh554h664h665hf7h79h648h649hbfh666hbbh5d9h588h5d7h5bch564h687h658h79h597hb8h56fhf4h644h5cch5d9h5chc4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ws2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6c5bb97f77-vgk6b_openstack(4548afd2-be23-4ea7-a5a4-14b8fad4f5fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:30:53 crc kubenswrapper[4984]: E0130 
10:30:53.865104 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6c5bb97f77-vgk6b" podUID="4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.413613 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.466334 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.495634 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.495661 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b045c802-b737-4590-82c8-e8a3a54247dc","Type":"ContainerDied","Data":"a04efe8f69e0a1c77f17c6cbcbef064de8fe92baf1eb1d209805ac607de8414a"} Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.495706 4984 scope.go:117] "RemoveContainer" containerID="ed4ee05a53ef4c9841ecc94a217e89fc7bdf76f14ea2fa95273b494e65088588" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.500383 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cfc24334-4217-4656-9b38-281626334606","Type":"ContainerDied","Data":"b1fa41772b71982e34712806d8e4239d302c810441b3b76f99d64938a74e6924"} Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.500417 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.503062 4984 generic.go:334] "Generic (PLEG): container finished" podID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerID="82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990" exitCode=0 Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.503135 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerDied","Data":"82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990"} Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519725 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519857 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519911 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519944 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 
30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519971 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.519998 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520036 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520071 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520087 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520110 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") pod 
\"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520136 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520150 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520174 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520191 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520209 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") pod \"cfc24334-4217-4656-9b38-281626334606\" (UID: \"cfc24334-4217-4656-9b38-281626334606\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.520225 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") pod \"b045c802-b737-4590-82c8-e8a3a54247dc\" (UID: \"b045c802-b737-4590-82c8-e8a3a54247dc\") " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.522397 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.523582 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs" (OuterVolumeSpecName: "logs") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.536679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts" (OuterVolumeSpecName: "scripts") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.538718 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.538897 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x" (OuterVolumeSpecName: "kube-api-access-sw66x") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "kube-api-access-sw66x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.540614 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs" (OuterVolumeSpecName: "logs") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.540982 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.543408 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.557027 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6" (OuterVolumeSpecName: "kube-api-access-b5lt6") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "kube-api-access-b5lt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.584878 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.586086 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts" (OuterVolumeSpecName: "scripts") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.591079 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.607574 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data" (OuterVolumeSpecName: "config-data") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.615425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cfc24334-4217-4656-9b38-281626334606" (UID: "cfc24334-4217-4656-9b38-281626334606"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.616901 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data" (OuterVolumeSpecName: "config-data") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623585 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623636 4984 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623662 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623671 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623704 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623712 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfc24334-4217-4656-9b38-281626334606-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623720 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5lt6\" (UniqueName: \"kubernetes.io/projected/b045c802-b737-4590-82c8-e8a3a54247dc-kube-api-access-b5lt6\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623730 4984 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623743 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623751 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623784 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623793 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw66x\" (UniqueName: \"kubernetes.io/projected/cfc24334-4217-4656-9b38-281626334606-kube-api-access-sw66x\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623801 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623809 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfc24334-4217-4656-9b38-281626334606-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.623817 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b045c802-b737-4590-82c8-e8a3a54247dc-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc 
kubenswrapper[4984]: I0130 10:30:55.629480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b045c802-b737-4590-82c8-e8a3a54247dc" (UID: "b045c802-b737-4590-82c8-e8a3a54247dc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.640903 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.643936 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.725355 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.725389 4984 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b045c802-b737-4590-82c8-e8a3a54247dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.725404 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.848070 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.867908 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc 
kubenswrapper[4984]: I0130 10:30:55.882577 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.893690 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.920752 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921194 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921218 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921234 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921242 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921284 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921292 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: E0130 10:30:55.921321 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921328 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" 
containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921515 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921541 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921556 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" containerName="glance-httpd" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.921572 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc24334-4217-4656-9b38-281626334606" containerName="glance-log" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.922621 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.926950 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.927553 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-94rmf" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.927568 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.927794 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.938862 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.941944 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.946382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.946673 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.948877 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:30:55 crc kubenswrapper[4984]: I0130 10:30:55.957831 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030583 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030648 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030679 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc 
kubenswrapper[4984]: I0130 10:30:56.030702 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030730 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030757 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030773 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030793 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 
crc kubenswrapper[4984]: I0130 10:30:56.030811 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030842 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030867 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030890 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030920 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030952 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.030996 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.103611 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b045c802-b737-4590-82c8-e8a3a54247dc" path="/var/lib/kubelet/pods/b045c802-b737-4590-82c8-e8a3a54247dc/volumes" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.106600 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc24334-4217-4656-9b38-281626334606" path="/var/lib/kubelet/pods/cfc24334-4217-4656-9b38-281626334606/volumes" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132583 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") 
pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132661 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132700 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132786 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132828 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132849 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132872 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132899 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132921 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132938 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132959 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 
30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.132977 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133003 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133028 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133043 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133076 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133380 4984 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.133663 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.134240 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.134475 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.138405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.139073 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.139705 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.141914 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.142749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.143287 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.145509 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.146616 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.148559 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.149114 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.154222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.155423 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") 
" pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.167278 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " pod="openstack/glance-default-external-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.171740 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.251106 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:30:56 crc kubenswrapper[4984]: I0130 10:30:56.261068 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:31:00 crc kubenswrapper[4984]: I0130 10:31:00.634665 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.000404 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.000718 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.000793 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.001679 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.001751 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796" gracePeriod=600 Jan 30 10:31:03 crc kubenswrapper[4984]: E0130 10:31:03.062193 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c1bd910_b683_42bf_966f_51a04ac18bd2.slice/crio-337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796.scope\": RecentStats: unable to find data in memory cache]" Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.577946 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796" exitCode=0 Jan 30 10:31:03 crc kubenswrapper[4984]: I0130 10:31:03.577983 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"} Jan 30 10:31:05 crc kubenswrapper[4984]: I0130 10:31:05.636210 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:05 crc kubenswrapper[4984]: I0130 10:31:05.636928 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" Jan 30 10:31:08 crc kubenswrapper[4984]: E0130 10:31:08.552838 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 10:31:08 crc kubenswrapper[4984]: E0130 
10:31:08.553507 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n594hb7h664h5d6h7fh9bh5cbh55dh58dh694h5fh59ch67bh645h57h64h5bh566h5fdh669h5c8h8ch5f4h649h67dhc6h5b4h68fh559h546h64ch5f5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwl5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-59c58ffc9c-jj2qg_openstack(c1e19dda-69fc-437b-b42c-727c3cff3813): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:08 crc kubenswrapper[4984]: E0130 10:31:08.557464 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-59c58ffc9c-jj2qg" podUID="c1e19dda-69fc-437b-b42c-727c3cff3813" Jan 30 10:31:10 crc kubenswrapper[4984]: I0130 10:31:10.637888 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.009487 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.015449 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl" Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072179 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072307 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072354 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072400 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072426 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072521 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072566 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072607 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072640 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") pod \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\" (UID: \"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa\") " Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072658 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs" (OuterVolumeSpecName: "logs") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072701 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") pod \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\" (UID: \"ea17d23b-4f8b-425c-bc10-f6bd35f661bf\") "
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.072908 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts" (OuterVolumeSpecName: "scripts") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073815 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073406 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs" (OuterVolumeSpecName: "logs") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073503 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts" (OuterVolumeSpecName: "scripts") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073972 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data" (OuterVolumeSpecName: "config-data") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.073975 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data" (OuterVolumeSpecName: "config-data") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.079045 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.079150 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd" (OuterVolumeSpecName: "kube-api-access-4hnsd") pod "ea17d23b-4f8b-425c-bc10-f6bd35f661bf" (UID: "ea17d23b-4f8b-425c-bc10-f6bd35f661bf"). InnerVolumeSpecName "kube-api-access-4hnsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.084456 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d" (OuterVolumeSpecName: "kube-api-access-2ws2d") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "kube-api-access-2ws2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.086409 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" (UID: "4548afd2-be23-4ea7-a5a4-14b8fad4f5fa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175639 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175691 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ws2d\" (UniqueName: \"kubernetes.io/projected/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-kube-api-access-2ws2d\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175710 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175725 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175737 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175754 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175767 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hnsd\" (UniqueName: \"kubernetes.io/projected/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-kube-api-access-4hnsd\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175779 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.175791 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea17d23b-4f8b-425c-bc10-f6bd35f661bf-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.692008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c95864f45-hf2gl" event={"ID":"ea17d23b-4f8b-425c-bc10-f6bd35f661bf","Type":"ContainerDied","Data":"a0f8ade3bd22a088d1a93fb53e8dade691d3e07b79328b0d3ec8e2f6f2ae944b"}
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.692045 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c95864f45-hf2gl"
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.693477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c5bb97f77-vgk6b" event={"ID":"4548afd2-be23-4ea7-a5a4-14b8fad4f5fa","Type":"ContainerDied","Data":"fd71d27e335efabe9d15b9d9f84c4edb23e62f7a111c91aa79359a1602da77c4"}
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.693587 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c5bb97f77-vgk6b"
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.789291 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"]
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.801825 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c95864f45-hf2gl"]
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.817335 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"]
Jan 30 10:31:13 crc kubenswrapper[4984]: I0130 10:31:13.826098 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c5bb97f77-vgk6b"]
Jan 30 10:31:14 crc kubenswrapper[4984]: I0130 10:31:14.102867 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4548afd2-be23-4ea7-a5a4-14b8fad4f5fa" path="/var/lib/kubelet/pods/4548afd2-be23-4ea7-a5a4-14b8fad4f5fa/volumes"
Jan 30 10:31:14 crc kubenswrapper[4984]: I0130 10:31:14.118511 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea17d23b-4f8b-425c-bc10-f6bd35f661bf" path="/var/lib/kubelet/pods/ea17d23b-4f8b-425c-bc10-f6bd35f661bf/volumes"
Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.426154 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 30 crc kubenswrapper[4984]: E0130 10:31:14.426723 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8chzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-bfzdw_openstack(3048d738-67a2-417f-91ca-8993f4b557f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.428382 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-bfzdw" podUID="3048d738-67a2-417f-91ca-8993f4b557f1"
Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.704113 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-bfzdw" podUID="3048d738-67a2-417f-91ca-8993f4b557f1"
Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.946784 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.946967 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92h6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pxnz6_openstack(84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 10:31:14 crc kubenswrapper[4984]: E0130 10:31:14.948153 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pxnz6" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.038010 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.114905 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") "
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.114964 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") "
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115089 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") "
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115113 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") "
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115131 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") "
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.115201 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") pod \"90d82977-c98d-495c-bb24-89cbe285c74e\" (UID: \"90d82977-c98d-495c-bb24-89cbe285c74e\") "
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.138556 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6" (OuterVolumeSpecName: "kube-api-access-6j8l6") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "kube-api-access-6j8l6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.157204 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.159710 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.160560 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.163078 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.165471 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config" (OuterVolumeSpecName: "config") pod "90d82977-c98d-495c-bb24-89cbe285c74e" (UID: "90d82977-c98d-495c-bb24-89cbe285c74e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217128 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j8l6\" (UniqueName: \"kubernetes.io/projected/90d82977-c98d-495c-bb24-89cbe285c74e-kube-api-access-6j8l6\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217186 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217199 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217212 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217224 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.217234 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90d82977-c98d-495c-bb24-89cbe285c74e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.639346 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout"
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.712573 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-kb757" event={"ID":"90d82977-c98d-495c-bb24-89cbe285c74e","Type":"ContainerDied","Data":"95f2327c3432ffa3640c01b2740a0df61c1a94375642a50247bc34eb3956031c"}
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.712695 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-kb757"
Jan 30 10:31:15 crc kubenswrapper[4984]: E0130 10:31:15.714935 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pxnz6" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.766472 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"]
Jan 30 10:31:15 crc kubenswrapper[4984]: I0130 10:31:15.775400 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-kb757"]
Jan 30 10:31:16 crc kubenswrapper[4984]: I0130 10:31:16.107141 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" path="/var/lib/kubelet/pods/90d82977-c98d-495c-bb24-89cbe285c74e/volumes"
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.081989 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2"
Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.085879 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.086313 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4q4x7_openstack(67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.087532 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4q4x7" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.100341 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg"
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160173 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160511 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160714 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160791 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160821 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160859 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160946 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.160990 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161091 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") pod \"c1e19dda-69fc-437b-b42c-727c3cff3813\" (UID: \"c1e19dda-69fc-437b-b42c-727c3cff3813\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161138 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") pod \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\" (UID: \"34fee7b8-8c52-498f-a9b2-ed2b18f555cc\") "
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts" (OuterVolumeSpecName: "scripts") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161790 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs" (OuterVolumeSpecName: "logs") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161803 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data" (OuterVolumeSpecName: "config-data") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161943 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e19dda-69fc-437b-b42c-727c3cff3813-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161961 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.161972 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1e19dda-69fc-437b-b42c-727c3cff3813-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.166094 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d" (OuterVolumeSpecName: "kube-api-access-wwl5d") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "kube-api-access-wwl5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.166522 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd" (OuterVolumeSpecName: "kube-api-access-j94kd") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "kube-api-access-j94kd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.166982 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.167275 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.169071 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts" (OuterVolumeSpecName: "scripts") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.179532 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1e19dda-69fc-437b-b42c-727c3cff3813" (UID: "c1e19dda-69fc-437b-b42c-727c3cff3813"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.190034 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.192453 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data" (OuterVolumeSpecName: "config-data") pod "34fee7b8-8c52-498f-a9b2-ed2b18f555cc" (UID: "34fee7b8-8c52-498f-a9b2-ed2b18f555cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263609 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwl5d\" (UniqueName: \"kubernetes.io/projected/c1e19dda-69fc-437b-b42c-727c3cff3813-kube-api-access-wwl5d\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263651 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263666 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263677 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-config-data\") on node \"crc\" DevicePath \"\""
Jan 30
10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263694 4984 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263705 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1e19dda-69fc-437b-b42c-727c3cff3813-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263716 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j94kd\" (UniqueName: \"kubernetes.io/projected/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-kube-api-access-j94kd\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.263728 4984 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34fee7b8-8c52-498f-a9b2-ed2b18f555cc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.488541 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.489092 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h659hcbh5d6h5c8h647h57bh677h5b7h88h69hdh84h7bh669h676hfbh5b6h688h657h88h5dbh5bfh96h686hbfh5d4h65fh64dh589h8fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nltq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(092048c5-1cfe-40c2-a319-23dde30a6c80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.514313 4984 scope.go:117] "RemoveContainer" containerID="ff10e3d16c25f992621611363b86d1d352e11c9a8d89ff10c52c4ce447ad2719" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.605428 4984 scope.go:117] "RemoveContainer" containerID="6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d" Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.607117 4984 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/glance-default-external-api-0_openstack_glance-httpd-6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d.log: no such file or directory" path="/var/log/containers/glance-default-external-api-0_openstack_glance-httpd-6cf57c18e317d78cad5aa0e3121df1cd0f9c03c552c801798eb4a38b3fdc705d.log" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.645286 4984 scope.go:117] "RemoveContainer" containerID="cb0b1f8568c079a5cb020f434aa1a3079fe39e6ebd3e15b1e885e67aa1ca4f7c" 
Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.682154 4984 scope.go:117] "RemoveContainer" containerID="b82f1b85404dcbc5f9d8eadb3090c2c2ef0eb00b5fff0be477852b279a7e7b6e" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.739116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62mq2" event={"ID":"34fee7b8-8c52-498f-a9b2-ed2b18f555cc","Type":"ContainerDied","Data":"b68d87d9bb96723ca8797d2627af124dde7df989f62624faf45fa6f9a02a018d"} Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.739159 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b68d87d9bb96723ca8797d2627af124dde7df989f62624faf45fa6f9a02a018d" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.739312 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62mq2" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.742851 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59c58ffc9c-jj2qg" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.743065 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59c58ffc9c-jj2qg" event={"ID":"c1e19dda-69fc-437b-b42c-727c3cff3813","Type":"ContainerDied","Data":"9a4e15ad5e7e83774d55c10e4f5efd7b7cb63c50cf1ac1e695342968f871f85d"} Jan 30 10:31:17 crc kubenswrapper[4984]: E0130 10:31:17.765778 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4q4x7" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.772266 4984 scope.go:117] "RemoveContainer" containerID="7b80db8ea61f966304fe1cac1b2a27737f3b06cad7fdf3340f32093e994d9116" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.814788 4984 scope.go:117] "RemoveContainer" containerID="d68d99d8593f342738d78c0b5a8442f084e96aacd80060fc1757b47d9dea5bb4" Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.826990 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.833652 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59c58ffc9c-jj2qg"] Jan 30 10:31:17 crc kubenswrapper[4984]: I0130 10:31:17.947811 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.066559 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cb76cb6cb-wtx8d"] Jan 30 10:31:18 crc kubenswrapper[4984]: W0130 10:31:18.075675 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1c7d24e_f131_485d_aaec_80a94d7ddd96.slice/crio-258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb WatchSource:0}: Error finding container 258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb: Status 404 returned error can't find the container with id 258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.118852 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e19dda-69fc-437b-b42c-727c3cff3813" path="/var/lib/kubelet/pods/c1e19dda-69fc-437b-b42c-727c3cff3813/volumes" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.168632 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.174756 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-62mq2"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.273751 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:31:18 crc kubenswrapper[4984]: E0130 10:31:18.274093 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274106 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" Jan 30 10:31:18 crc kubenswrapper[4984]: E0130 10:31:18.274118 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="init" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274124 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="init" Jan 30 10:31:18 crc kubenswrapper[4984]: E0130 10:31:18.274143 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerName="keystone-bootstrap" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274152 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerName="keystone-bootstrap" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274356 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d82977-c98d-495c-bb24-89cbe285c74e" containerName="dnsmasq-dns" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.274377 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" containerName="keystone-bootstrap" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.276976 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281047 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281268 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281388 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281497 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.281776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.290177 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389562 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389635 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389809 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389874 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.389942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.390145 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.492816 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493110 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493314 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493503 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggl7w\" 
(UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.493609 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.499115 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.499634 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.500834 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.505810 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"keystone-bootstrap-qb89x\" (UID: 
\"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.506316 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.520290 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"keystone-bootstrap-qb89x\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.599207 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.764647 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerStarted","Data":"69bd05a6495e5cb7cdf4e1d3db592b4ecb95799d07ea2642a2cb5673af58d135"} Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.766513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb76cb6cb-wtx8d" event={"ID":"d1c7d24e-f131-485d-aaec-80a94d7ddd96","Type":"ContainerStarted","Data":"258a64267aa46e4c16dbc5e515c48e6ba4821ac8f3927652378149d00d65f2eb"} Jan 30 10:31:18 crc kubenswrapper[4984]: I0130 10:31:18.769514 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8"} Jan 30 10:31:19 crc 
kubenswrapper[4984]: I0130 10:31:19.066560 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:31:19 crc kubenswrapper[4984]: W0130 10:31:19.254289 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a91d1d_433e_415f_83f8_04185f2bae8e.slice/crio-81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b WatchSource:0}: Error finding container 81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b: Status 404 returned error can't find the container with id 81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.475447 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:31:19 crc kubenswrapper[4984]: W0130 10:31:19.495310 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ce38a2_070f_4aac_9495_d27d915c5ae1.slice/crio-0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4 WatchSource:0}: Error finding container 0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4: Status 404 returned error can't find the container with id 0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4 Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.809493 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerStarted","Data":"ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.809736 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" 
event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerStarted","Data":"0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.809843 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.813655 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerStarted","Data":"81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.824656 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.830264 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qb89x" podStartSLOduration=1.8302334139999998 podStartE2EDuration="1.830233414s" podCreationTimestamp="2026-01-30 10:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:19.825855965 +0000 UTC m=+1184.392159789" watchObservedRunningTime="2026-01-30 10:31:19.830233414 +0000 UTC m=+1184.396537238" Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.842886 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb76cb6cb-wtx8d" event={"ID":"d1c7d24e-f131-485d-aaec-80a94d7ddd96","Type":"ContainerStarted","Data":"54296f32da4ae48ad1ffc63b9f73dfd3b05f17ecbf08112bb356c4210bf7eeba"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.854391 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" 
event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerStarted","Data":"5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.854427 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerStarted","Data":"e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500"} Jan 30 10:31:19 crc kubenswrapper[4984]: I0130 10:31:19.883463 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6b65cc758d-9hz7t" podStartSLOduration=33.841951689 podStartE2EDuration="34.883447724s" podCreationTimestamp="2026-01-30 10:30:45 +0000 UTC" firstStartedPulling="2026-01-30 10:31:17.94527295 +0000 UTC m=+1182.511576774" lastFinishedPulling="2026-01-30 10:31:18.986768985 +0000 UTC m=+1183.553072809" observedRunningTime="2026-01-30 10:31:19.880527525 +0000 UTC m=+1184.446831339" watchObservedRunningTime="2026-01-30 10:31:19.883447724 +0000 UTC m=+1184.449751548" Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.109414 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fee7b8-8c52-498f-a9b2-ed2b18f555cc" path="/var/lib/kubelet/pods/34fee7b8-8c52-498f-a9b2-ed2b18f555cc/volumes" Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.884236 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cb76cb6cb-wtx8d" event={"ID":"d1c7d24e-f131-485d-aaec-80a94d7ddd96","Type":"ContainerStarted","Data":"10fdf60d10e120f734c85f0a1581a5c707b14d2b9d601ff8411e37e16ff37617"} Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.887864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerStarted","Data":"b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540"} Jan 30 10:31:20 crc 
kubenswrapper[4984]: I0130 10:31:20.887902 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerStarted","Data":"8a1e7d08bcb7a1c10909d3b6f8549348ca67f5b537c84b6ec8529217335158a6"} Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.897351 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerStarted","Data":"26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09"} Jan 30 10:31:20 crc kubenswrapper[4984]: I0130 10:31:20.906869 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cb76cb6cb-wtx8d" podStartSLOduration=34.595054695 podStartE2EDuration="35.906854087s" podCreationTimestamp="2026-01-30 10:30:45 +0000 UTC" firstStartedPulling="2026-01-30 10:31:18.07809198 +0000 UTC m=+1182.644395804" lastFinishedPulling="2026-01-30 10:31:19.389891372 +0000 UTC m=+1183.956195196" observedRunningTime="2026-01-30 10:31:20.900839833 +0000 UTC m=+1185.467143657" watchObservedRunningTime="2026-01-30 10:31:20.906854087 +0000 UTC m=+1185.473157911" Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.905557 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerStarted","Data":"f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b"} Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.910097 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerStarted","Data":"01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d"} Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.933470 4984 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.933451876 podStartE2EDuration="26.933451876s" podCreationTimestamp="2026-01-30 10:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:21.921712037 +0000 UTC m=+1186.488015861" watchObservedRunningTime="2026-01-30 10:31:21.933451876 +0000 UTC m=+1186.499755690" Jan 30 10:31:21 crc kubenswrapper[4984]: I0130 10:31:21.951308 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.951283533 podStartE2EDuration="26.951283533s" podCreationTimestamp="2026-01-30 10:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:21.945169006 +0000 UTC m=+1186.511472840" watchObservedRunningTime="2026-01-30 10:31:21.951283533 +0000 UTC m=+1186.517587377" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.484178 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.485904 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.559140 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.559312 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:25 crc kubenswrapper[4984]: I0130 10:31:25.953432 4984 generic.go:334] "Generic (PLEG): container finished" podID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerID="ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd" exitCode=0 Jan 30 10:31:25 
crc kubenswrapper[4984]: I0130 10:31:25.953752 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerDied","Data":"ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd"} Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.251775 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.252142 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.252159 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.252172 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261424 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261481 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261673 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.261790 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.287672 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.289708 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.317213 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 10:31:26 crc kubenswrapper[4984]: I0130 10:31:26.336750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.145884 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303505 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303648 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303685 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303715 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") 
pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303742 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.303798 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") pod \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\" (UID: \"e6ce38a2-070f-4aac-9495-d27d915c5ae1\") " Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.318109 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w" (OuterVolumeSpecName: "kube-api-access-ggl7w") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "kube-api-access-ggl7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.377342 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.377999 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts" (OuterVolumeSpecName: "scripts") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.405757 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggl7w\" (UniqueName: \"kubernetes.io/projected/e6ce38a2-070f-4aac-9495-d27d915c5ae1-kube-api-access-ggl7w\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.405800 4984 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.405813 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.415917 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.415980 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.416083 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data" (OuterVolumeSpecName: "config-data") pod "e6ce38a2-070f-4aac-9495-d27d915c5ae1" (UID: "e6ce38a2-070f-4aac-9495-d27d915c5ae1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.507577 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.507614 4984 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.507631 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce38a2-070f-4aac-9495-d27d915c5ae1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.991885 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb89x" event={"ID":"e6ce38a2-070f-4aac-9495-d27d915c5ae1","Type":"ContainerDied","Data":"0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4"} Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.991922 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a78446614bd64b28e5bc81c2ef1111174baf42b7a43f52b1a42caa28317dbd4" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.992028 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qb89x" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.998809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:28 crc kubenswrapper[4984]: I0130 10:31:28.998917 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.154900 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.154997 4984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.157493 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.273057 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9fd9687b7-kdppr"] Jan 30 10:31:29 crc kubenswrapper[4984]: E0130 10:31:29.273424 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerName="keystone-bootstrap" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.273439 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerName="keystone-bootstrap" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.273600 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" containerName="keystone-bootstrap" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.274109 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.276882 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.278661 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.279092 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nsrjn" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.279346 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.279835 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.280161 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.291410 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fd9687b7-kdppr"] Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.310358 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422833 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-credential-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422891 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-combined-ca-bundle\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422929 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-public-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.422948 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-config-data\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423076 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zx82\" (UniqueName: \"kubernetes.io/projected/0cddf025-bb36-4984-82b8-360ab9f3d91c-kube-api-access-5zx82\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-internal-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423178 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-scripts\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.423404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-fernet-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.524848 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-public-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525177 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-config-data\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525205 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zx82\" (UniqueName: \"kubernetes.io/projected/0cddf025-bb36-4984-82b8-360ab9f3d91c-kube-api-access-5zx82\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525228 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-internal-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525264 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-scripts\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525356 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-fernet-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525424 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-credential-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.525462 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-combined-ca-bundle\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.530875 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-public-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: 
\"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.531644 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-scripts\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.541944 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-combined-ca-bundle\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.542808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-internal-tls-certs\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.542862 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-fernet-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.544935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-config-data\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.545839 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0cddf025-bb36-4984-82b8-360ab9f3d91c-credential-keys\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.549357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zx82\" (UniqueName: \"kubernetes.io/projected/0cddf025-bb36-4984-82b8-360ab9f3d91c-kube-api-access-5zx82\") pod \"keystone-9fd9687b7-kdppr\" (UID: \"0cddf025-bb36-4984-82b8-360ab9f3d91c\") " pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:29 crc kubenswrapper[4984]: I0130 10:31:29.592909 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:30 crc kubenswrapper[4984]: I0130 10:31:30.086893 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9fd9687b7-kdppr"] Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.011416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9687b7-kdppr" event={"ID":"0cddf025-bb36-4984-82b8-360ab9f3d91c","Type":"ContainerStarted","Data":"6232915df589444d72f77d65fcbb2851429349743e1cf5c2966d156f8cf417c1"} Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.011966 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9fd9687b7-kdppr" event={"ID":"0cddf025-bb36-4984-82b8-360ab9f3d91c","Type":"ContainerStarted","Data":"9f16e57b5268ca29551a4a9d6721691ca6c4503ade57184c5bcf50853ef3cbdb"} Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.011988 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:31:31 crc kubenswrapper[4984]: I0130 10:31:31.039057 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-9fd9687b7-kdppr" podStartSLOduration=2.039023459 podStartE2EDuration="2.039023459s" podCreationTimestamp="2026-01-30 10:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:31.030739983 +0000 UTC m=+1195.597043817" watchObservedRunningTime="2026-01-30 10:31:31.039023459 +0000 UTC m=+1195.605327283" Jan 30 10:31:35 crc kubenswrapper[4984]: I0130 10:31:35.485711 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 30 10:31:35 crc kubenswrapper[4984]: I0130 10:31:35.562307 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cb76cb6cb-wtx8d" podUID="d1c7d24e-f131-485d-aaec-80a94d7ddd96" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.084982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.088592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerStarted","Data":"39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.090953 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" 
event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerStarted","Data":"f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.093563 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerStarted","Data":"71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb"} Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.114318 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4q4x7" podStartSLOduration=2.870584614 podStartE2EDuration="1m1.114298118s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="2026-01-30 10:30:37.869962052 +0000 UTC m=+1142.436265876" lastFinishedPulling="2026-01-30 10:31:36.113675556 +0000 UTC m=+1200.679979380" observedRunningTime="2026-01-30 10:31:37.106841725 +0000 UTC m=+1201.673145559" watchObservedRunningTime="2026-01-30 10:31:37.114298118 +0000 UTC m=+1201.680601952" Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.127914 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pxnz6" podStartSLOduration=3.5824867559999998 podStartE2EDuration="1m1.127896059s" podCreationTimestamp="2026-01-30 10:30:36 +0000 UTC" firstStartedPulling="2026-01-30 10:30:38.284155581 +0000 UTC m=+1142.850459395" lastFinishedPulling="2026-01-30 10:31:35.829564874 +0000 UTC m=+1200.395868698" observedRunningTime="2026-01-30 10:31:37.126130381 +0000 UTC m=+1201.692434205" watchObservedRunningTime="2026-01-30 10:31:37.127896059 +0000 UTC m=+1201.694199883" Jan 30 10:31:37 crc kubenswrapper[4984]: I0130 10:31:37.147217 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bfzdw" podStartSLOduration=3.373725286 podStartE2EDuration="1m1.147203295s" podCreationTimestamp="2026-01-30 10:30:36 
+0000 UTC" firstStartedPulling="2026-01-30 10:30:38.060007932 +0000 UTC m=+1142.626311756" lastFinishedPulling="2026-01-30 10:31:35.833485901 +0000 UTC m=+1200.399789765" observedRunningTime="2026-01-30 10:31:37.143838603 +0000 UTC m=+1201.710142417" watchObservedRunningTime="2026-01-30 10:31:37.147203295 +0000 UTC m=+1201.713507109" Jan 30 10:31:47 crc kubenswrapper[4984]: I0130 10:31:47.353965 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:47 crc kubenswrapper[4984]: I0130 10:31:47.375772 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.050414 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cb76cb6cb-wtx8d" Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.131658 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.133207 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.207611 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log" containerID="cri-o://e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500" gracePeriod=30 Jan 30 10:31:49 crc kubenswrapper[4984]: I0130 10:31:49.207767 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" containerID="cri-o://5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3" gracePeriod=30 Jan 30 10:31:51 crc kubenswrapper[4984]: E0130 10:31:51.794953 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233227 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerStarted","Data":"fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338"} Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233421 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd" containerID="cri-o://fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338" gracePeriod=30 Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233468 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent" containerID="cri-o://da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e" gracePeriod=30 Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233484 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core" containerID="cri-o://46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e" gracePeriod=30 Jan 30 10:31:52 crc kubenswrapper[4984]: I0130 10:31:52.233691 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.244612 4984 generic.go:334] "Generic (PLEG): container finished" podID="092048c5-1cfe-40c2-a319-23dde30a6c80" 
containerID="fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338" exitCode=0
Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.245007 4984 generic.go:334] "Generic (PLEG): container finished" podID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerID="46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e" exitCode=2
Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.244717 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338"}
Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.245070 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e"}
Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.247368 4984 generic.go:334] "Generic (PLEG): container finished" podID="1238c32f-7644-4b33-8960-b97c64733162" containerID="5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3" exitCode=0
Jan 30 10:31:53 crc kubenswrapper[4984]: I0130 10:31:53.247409 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerDied","Data":"5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3"}
Jan 30 10:31:54 crc kubenswrapper[4984]: I0130 10:31:54.258629 4984 generic.go:334] "Generic (PLEG): container finished" podID="3048d738-67a2-417f-91ca-8993f4b557f1" containerID="f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732" exitCode=0
Jan 30 10:31:54 crc kubenswrapper[4984]: I0130 10:31:54.258741 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerDied","Data":"f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732"}
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.274305 4984 generic.go:334] "Generic (PLEG): container finished" podID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerID="da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e" exitCode=0
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.274319 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e"}
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.276748 4984 generic.go:334] "Generic (PLEG): container finished" podID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerID="71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb" exitCode=0
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.276797 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerDied","Data":"71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb"}
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.484579 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused"
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.520975 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618028 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618102 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618162 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618197 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618236 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618313 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618362 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") pod \"092048c5-1cfe-40c2-a319-23dde30a6c80\" (UID: \"092048c5-1cfe-40c2-a319-23dde30a6c80\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.618857 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bfzdw"
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.619903 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.620193 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.625595 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq" (OuterVolumeSpecName: "kube-api-access-2nltq") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "kube-api-access-2nltq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.630386 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts" (OuterVolumeSpecName: "scripts") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.653120 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.693297 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.713356 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data" (OuterVolumeSpecName: "config-data") pod "092048c5-1cfe-40c2-a319-23dde30a6c80" (UID: "092048c5-1cfe-40c2-a319-23dde30a6c80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.719856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.719908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720090 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720135 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720192 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") pod \"3048d738-67a2-417f-91ca-8993f4b557f1\" (UID: \"3048d738-67a2-417f-91ca-8993f4b557f1\") "
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720578 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720595 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720604 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720612 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nltq\" (UniqueName: \"kubernetes.io/projected/092048c5-1cfe-40c2-a319-23dde30a6c80-kube-api-access-2nltq\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720621 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720628 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/092048c5-1cfe-40c2-a319-23dde30a6c80-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.720638 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092048c5-1cfe-40c2-a319-23dde30a6c80-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.721448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs" (OuterVolumeSpecName: "logs") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.723420 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts" (OuterVolumeSpecName: "scripts") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.723579 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc" (OuterVolumeSpecName: "kube-api-access-8chzc") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "kube-api-access-8chzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.744675 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.752257 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data" (OuterVolumeSpecName: "config-data") pod "3048d738-67a2-417f-91ca-8993f4b557f1" (UID: "3048d738-67a2-417f-91ca-8993f4b557f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822507 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8chzc\" (UniqueName: \"kubernetes.io/projected/3048d738-67a2-417f-91ca-8993f4b557f1-kube-api-access-8chzc\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822552 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822565 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822577 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3048d738-67a2-417f-91ca-8993f4b557f1-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:55 crc kubenswrapper[4984]: I0130 10:31:55.822590 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3048d738-67a2-417f-91ca-8993f4b557f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.290375 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bfzdw" event={"ID":"3048d738-67a2-417f-91ca-8993f4b557f1","Type":"ContainerDied","Data":"8c1d0f7dc02303cf5bb0d029a247772d55790cec54bba645727d9dadb4e4bde2"}
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.290438 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1d0f7dc02303cf5bb0d029a247772d55790cec54bba645727d9dadb4e4bde2"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.290520 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bfzdw"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.303684 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.303925 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"092048c5-1cfe-40c2-a319-23dde30a6c80","Type":"ContainerDied","Data":"33596f8966073af30764199a2a914ff3e9f8caa7ca44b53b652d4e885a08aa2f"}
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.304027 4984 scope.go:117] "RemoveContainer" containerID="fe90a17e9eae41452a5c22bb09dfc1cdde1dd6ac8e8d06335f25c72d33e59338"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.407433 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.419868 4984 scope.go:117] "RemoveContainer" containerID="46fd22d1d38385651f110345a765172abb4953e7c8dd378404d60fa1d39abd4e"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.424362 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431088 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68474f84b8-6pzwt"]
Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431498 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431516 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core"
Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431530 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431536 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd"
Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431552 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431558 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent"
Jan 30 10:31:56 crc kubenswrapper[4984]: E0130 10:31:56.431582 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" containerName="placement-db-sync"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431588 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" containerName="placement-db-sync"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431746 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="ceilometer-notification-agent"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431803 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="proxy-httpd"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431824 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" containerName="placement-db-sync"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.431842 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" containerName="sg-core"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.432801 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.435068 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.439766 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.439987 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.440144 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.440809 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-gnpsj"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.441767 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.443713 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.449499 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.449693 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.454433 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68474f84b8-6pzwt"]
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.476202 4984 scope.go:117] "RemoveContainer" containerID="da46ba433d841f53c30a3736ccad27d2bbec42b13d6f613d42551036c223d59e"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.488919 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538555 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbfxs\" (UniqueName: \"kubernetes.io/projected/34cd991a-90cf-410c-828d-db99caf6dcea-kube-api-access-nbfxs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538605 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538629 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-internal-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538678 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-public-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538702 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-config-data\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538782 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34cd991a-90cf-410c-828d-db99caf6dcea-logs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538873 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-scripts\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538908 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538946 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-combined-ca-bundle\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.538994 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652677 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-public-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652779 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-config-data\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652844 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652922 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652951 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34cd991a-90cf-410c-828d-db99caf6dcea-logs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.652996 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-scripts\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653073 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653113 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-combined-ca-bundle\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653198 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653290 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbfxs\" (UniqueName: \"kubernetes.io/projected/34cd991a-90cf-410c-828d-db99caf6dcea-kube-api-access-nbfxs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653822 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.653873 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-internal-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.659063 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.667554 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-config-data\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.668565 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.673265 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34cd991a-90cf-410c-828d-db99caf6dcea-logs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.675494 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.675819 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-scripts\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.680689 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-combined-ca-bundle\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.680991 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.681943 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.682294 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-internal-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.685913 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.691230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbfxs\" (UniqueName: \"kubernetes.io/projected/34cd991a-90cf-410c-828d-db99caf6dcea-kube-api-access-nbfxs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.695941 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34cd991a-90cf-410c-828d-db99caf6dcea-public-tls-certs\") pod \"placement-68474f84b8-6pzwt\" (UID: \"34cd991a-90cf-410c-828d-db99caf6dcea\") " pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.703082 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"ceilometer-0\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") " pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.728802 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.731206 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.772368 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.886395 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.961892 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") pod \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.962021 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") pod \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.962064 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") pod \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\" (UID: \"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1\") " Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.969434 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" (UID: "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:56 crc kubenswrapper[4984]: I0130 10:31:56.969813 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j" (OuterVolumeSpecName: "kube-api-access-92h6j") pod "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" (UID: "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"). 
InnerVolumeSpecName "kube-api-access-92h6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.043356 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" (UID: "84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.065070 4984 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.065233 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.065274 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92h6j\" (UniqueName: \"kubernetes.io/projected/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1-kube-api-access-92h6j\") on node \"crc\" DevicePath \"\"" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.183792 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68474f84b8-6pzwt"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.351508 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:31:57 crc kubenswrapper[4984]: W0130 10:31:57.363133 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5ff484_b6c4_42ea_ae17_1b11c214f435.slice/crio-3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b 
WatchSource:0}: Error finding container 3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b: Status 404 returned error can't find the container with id 3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.364338 4984 generic.go:334] "Generic (PLEG): container finished" podID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerID="39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1" exitCode=0 Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.364423 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerDied","Data":"39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1"} Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.376704 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pxnz6" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.376719 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pxnz6" event={"ID":"84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1","Type":"ContainerDied","Data":"c150541fb16c40a06f1f4b6b64bfff01ebc0687acbccdc05a1f6c7f17f0d9920"} Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.376749 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c150541fb16c40a06f1f4b6b64bfff01ebc0687acbccdc05a1f6c7f17f0d9920" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.390530 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68474f84b8-6pzwt" event={"ID":"34cd991a-90cf-410c-828d-db99caf6dcea","Type":"ContainerStarted","Data":"64c0d3ab3a05e57909f45192a18006bb1483b72ac403a24ddf8b45b711cbbf43"} Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.499730 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-664bd6b5fc-shfjg"] Jan 30 10:31:57 crc 
kubenswrapper[4984]: E0130 10:31:57.500129 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerName="barbican-db-sync" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.500165 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerName="barbican-db-sync" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.500367 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" containerName="barbican-db-sync" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.501289 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.507345 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.507562 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dbvq5" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.507802 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.514211 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664bd6b5fc-shfjg"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7bs\" (UniqueName: \"kubernetes.io/projected/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-kube-api-access-kn7bs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580681 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-combined-ca-bundle\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580701 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-logs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580747 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data-custom\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.580767 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.586219 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-75ff98474b-zm29s"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.587574 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.593927 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.603691 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75ff98474b-zm29s"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.619807 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.621237 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.629029 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694524 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data-custom\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694550 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694597 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-combined-ca-bundle\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694656 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1368411d-c934-4d15-a67b-dc840dbe010d-logs\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694691 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7bs\" (UniqueName: \"kubernetes.io/projected/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-kube-api-access-kn7bs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xwx\" (UniqueName: \"kubernetes.io/projected/1368411d-c934-4d15-a67b-dc840dbe010d-kube-api-access-55xwx\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 
10:31:57.694737 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data-custom\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694773 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-combined-ca-bundle\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.694789 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-logs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.695405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-logs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.701654 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.702495 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-combined-ca-bundle\") pod 
\"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.702868 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data-custom\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.703387 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.713584 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.720077 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-config-data\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.726422 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.744354 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7bs\" (UniqueName: \"kubernetes.io/projected/aa6393c8-34de-43fc-9a00-a0f87b31d8e8-kube-api-access-kn7bs\") pod \"barbican-worker-664bd6b5fc-shfjg\" (UID: \"aa6393c8-34de-43fc-9a00-a0f87b31d8e8\") " pod="openstack/barbican-worker-664bd6b5fc-shfjg" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805409 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data-custom\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805480 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805561 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805668 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805716 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-combined-ca-bundle\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805761 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805852 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805890 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1368411d-c934-4d15-a67b-dc840dbe010d-logs\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 
10:31:57.805907 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805927 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805975 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.805999 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.806015 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 
10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.806035 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xwx\" (UniqueName: \"kubernetes.io/projected/1368411d-c934-4d15-a67b-dc840dbe010d-kube-api-access-55xwx\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.807384 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1368411d-c934-4d15-a67b-dc840dbe010d-logs\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.813152 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-combined-ca-bundle\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.817894 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data-custom\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.825890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1368411d-c934-4d15-a67b-dc840dbe010d-config-data\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " 
pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.835038 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xwx\" (UniqueName: \"kubernetes.io/projected/1368411d-c934-4d15-a67b-dc840dbe010d-kube-api-access-55xwx\") pod \"barbican-keystone-listener-75ff98474b-zm29s\" (UID: \"1368411d-c934-4d15-a67b-dc840dbe010d\") " pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.907999 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908080 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908119 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908202 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"barbican-api-686dddff74-vgg85\" (UID: 
\"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908261 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908441 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908517 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908588 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908648 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.908672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.909415 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.910433 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.910939 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.911417 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.911459 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.911561 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.916820 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.919163 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.924776 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.927079 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"barbican-api-686dddff74-vgg85\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") " pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:57 crc kubenswrapper[4984]: I0130 10:31:57.930236 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"dnsmasq-dns-7c67bffd47-ldl9f\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.037897 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664bd6b5fc-shfjg"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.066173 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.083684 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.092808 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.104020 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092048c5-1cfe-40c2-a319-23dde30a6c80" path="/var/lib/kubelet/pods/092048c5-1cfe-40c2-a319-23dde30a6c80/volumes"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.409117 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68474f84b8-6pzwt" event={"ID":"34cd991a-90cf-410c-828d-db99caf6dcea","Type":"ContainerStarted","Data":"15f794d2d82829def16964874a24d8eba8ba29a4e58a6bb4e3e49826860e3b40"}
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.409521 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68474f84b8-6pzwt" event={"ID":"34cd991a-90cf-410c-828d-db99caf6dcea","Type":"ContainerStarted","Data":"1771be91f0f551b12866727a480e2180dc533c5cb1832ea0888f57d3c300d1ed"}
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.411206 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.411338 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.414778 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b"}
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.414818 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b"}
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.457214 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68474f84b8-6pzwt" podStartSLOduration=2.457196187 podStartE2EDuration="2.457196187s" podCreationTimestamp="2026-01-30 10:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:58.445648193 +0000 UTC m=+1223.011952027" watchObservedRunningTime="2026-01-30 10:31:58.457196187 +0000 UTC m=+1223.023500011"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.603281 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664bd6b5fc-shfjg"]
Jan 30 10:31:58 crc kubenswrapper[4984]: W0130 10:31:58.749849 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1368411d_c934_4d15_a67b_dc840dbe010d.slice/crio-efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14 WatchSource:0}: Error finding container efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14: Status 404 returned error can't find the container with id efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.755447 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75ff98474b-zm29s"]
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.765584 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4q4x7"
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.859408 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"]
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.866796 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"]
Jan 30 10:31:58 crc kubenswrapper[4984]: W0130 10:31:58.881537 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04477670_b6dd_441f_909a_e6b56bf335d5.slice/crio-a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631 WatchSource:0}: Error finding container a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631: Status 404 returned error can't find the container with id a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.942960 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") "
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943132 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") "
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943201 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") "
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943301 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943431 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") "
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943489 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") "
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.943513 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") pod \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\" (UID: \"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80\") "
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.944369 4984 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.947470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts" (OuterVolumeSpecName: "scripts") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.949129 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp" (OuterVolumeSpecName: "kube-api-access-nrxhp") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "kube-api-access-nrxhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:31:58 crc kubenswrapper[4984]: I0130 10:31:58.951385 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.001769 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.021710 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data" (OuterVolumeSpecName: "config-data") pod "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" (UID: "67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045802 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045836 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrxhp\" (UniqueName: \"kubernetes.io/projected/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-kube-api-access-nrxhp\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045848 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045858 4984 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.045867 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.435714 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerStarted","Data":"8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.435971 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerStarted","Data":"ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.435981 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerStarted","Data":"a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.438076 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.438103 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.441304 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4q4x7" event={"ID":"67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80","Type":"ContainerDied","Data":"bac16c50dbc54a56989a819b2fb558872bce9cd29de279c380d506e6e46a94f0"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.441333 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac16c50dbc54a56989a819b2fb558872bce9cd29de279c380d506e6e46a94f0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.441341 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4q4x7"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.448569 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" event={"ID":"1368411d-c934-4d15-a67b-dc840dbe010d","Type":"ContainerStarted","Data":"efaf98c106eeacdfe49e3b1291a96438f842f320631c77485bbf992d96bc5d14"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.465215 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664bd6b5fc-shfjg" event={"ID":"aa6393c8-34de-43fc-9a00-a0f87b31d8e8","Type":"ContainerStarted","Data":"d99dde3d2e570bc78da84291eccae7818930b55d33c57c5d7b3f7af85875e25b"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.475439 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.475477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.476997 4984 generic.go:334] "Generic (PLEG): container finished" podID="db250c1d-d110-46f5-ae22-46a1e507a922" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37" exitCode=0
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.477586 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerDied","Data":"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.477641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerStarted","Data":"5804a7b25d440dcdbdeccb2de4750734b5adefbb2c8bb1ff519cee9eb3d7d6fa"}
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.504474 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-686dddff74-vgg85" podStartSLOduration=2.504455781 podStartE2EDuration="2.504455781s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:31:59.465066067 +0000 UTC m=+1224.031369901" watchObservedRunningTime="2026-01-30 10:31:59.504455781 +0000 UTC m=+1224.070759605"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.674425 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:31:59 crc kubenswrapper[4984]: E0130 10:31:59.674771 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerName="cinder-db-sync"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.674783 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerName="cinder-db-sync"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.674954 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" containerName="cinder-db-sync"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.675916 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684149 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684340 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-t4jkv"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684451 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.684567 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.693731 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.775313 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"]
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.818645 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"]
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.820264 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.837837 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"]
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870494 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870570 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870598 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870616 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870668 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.870710 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.972863 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.972967 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973187 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973233 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973285 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973321 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973343 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973392 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973423 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973445 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973479 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.973529 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.977895 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.985403 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.986126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.987643 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:31:59 crc kubenswrapper[4984]: I0130 10:31:59.987811 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.016360 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.018728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cinder-scheduler-0\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.020359 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.023972 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.034312 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.040418 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074710 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074760 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074795 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074835 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.074857 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.075789 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.075960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.076482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.077073 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.077114 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.113430 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"dnsmasq-dns-5cc8b5d5c5-8bwxw\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.174847 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.175925 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.175953 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176014 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176049 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176083 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176098 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.176141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277662 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277714 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277761 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277784 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277856 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.277910 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.278310 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.279219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.282493 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.282683 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.283070 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.289222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.296227 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"cinder-api-0\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " pod="openstack/cinder-api-0" Jan 30 10:32:00 crc kubenswrapper[4984]: I0130 10:32:00.454638 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:01 crc kubenswrapper[4984]: I0130 10:32:01.533997 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-9fd9687b7-kdppr" Jan 30 10:32:01 crc kubenswrapper[4984]: I0130 10:32:01.904242 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.009483 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:32:02 crc kubenswrapper[4984]: W0130 10:32:02.037293 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2a86d1a_4829_4934_83dd_b52dc378a4cf.slice/crio-748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4 WatchSource:0}: Error finding container 748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4: Status 404 returned error can't find the container with id 748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.137536 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:02 crc kubenswrapper[4984]: W0130 10:32:02.141305 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcde56acd_942d_47dd_8417_8c92170502ce.slice/crio-32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8 WatchSource:0}: Error finding container 32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8: Status 404 returned error can't find the container with id 32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.538300 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" 
event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerStarted","Data":"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.538427 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.538438 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" containerID="cri-o://49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" gracePeriod=10 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.547412 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerStarted","Data":"5d408605319c89d081b5548ebdb4c7ea288ca2bdefa7e08a28be726765947e9d"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.553242 4984 generic.go:334] "Generic (PLEG): container finished" podID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" exitCode=0 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.553306 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerDied","Data":"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.553326 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerStarted","Data":"748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.562299 4984 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" podStartSLOduration=5.562283151 podStartE2EDuration="5.562283151s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:02.557765278 +0000 UTC m=+1227.124069112" watchObservedRunningTime="2026-01-30 10:32:02.562283151 +0000 UTC m=+1227.128586975" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.566315 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerStarted","Data":"32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.589135 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" event={"ID":"1368411d-c934-4d15-a67b-dc840dbe010d","Type":"ContainerStarted","Data":"c1dc024a8a372c30c6e909e2090d9ed9d87971cc75d538f887f6f0dc53951197"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.589225 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" event={"ID":"1368411d-c934-4d15-a67b-dc840dbe010d","Type":"ContainerStarted","Data":"2407eb0ad66869e23ece625c09789b110449d66ee9e08012369a3b9f50b5e63f"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.603454 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664bd6b5fc-shfjg" event={"ID":"aa6393c8-34de-43fc-9a00-a0f87b31d8e8","Type":"ContainerStarted","Data":"2897b1ba156a11bb223869a25cfb36e7d9bb72cd81afd6033d9097d57b33c578"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.603498 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664bd6b5fc-shfjg" 
event={"ID":"aa6393c8-34de-43fc-9a00-a0f87b31d8e8","Type":"ContainerStarted","Data":"750f154a519d1c25a2cb7fa7537cd7133032f283828a1c85195f09c5b402de7a"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624537 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerStarted","Data":"3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca"} Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624682 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent" containerID="cri-o://9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b" gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624860 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624900 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd" containerID="cri-o://3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca" gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624940 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core" containerID="cri-o://3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631" gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.624979 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent" containerID="cri-o://96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00" 
gracePeriod=30 Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.664960 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-75ff98474b-zm29s" podStartSLOduration=3.093723461 podStartE2EDuration="5.664936939s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="2026-01-30 10:31:58.760979957 +0000 UTC m=+1223.327283781" lastFinishedPulling="2026-01-30 10:32:01.332193415 +0000 UTC m=+1225.898497259" observedRunningTime="2026-01-30 10:32:02.615007888 +0000 UTC m=+1227.181311712" watchObservedRunningTime="2026-01-30 10:32:02.664936939 +0000 UTC m=+1227.231240763" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.666961 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-664bd6b5fc-shfjg" podStartSLOduration=2.922804882 podStartE2EDuration="5.666957134s" podCreationTimestamp="2026-01-30 10:31:57 +0000 UTC" firstStartedPulling="2026-01-30 10:31:58.618587236 +0000 UTC m=+1223.184891060" lastFinishedPulling="2026-01-30 10:32:01.362739478 +0000 UTC m=+1225.929043312" observedRunningTime="2026-01-30 10:32:02.656092588 +0000 UTC m=+1227.222396412" watchObservedRunningTime="2026-01-30 10:32:02.666957134 +0000 UTC m=+1227.233260958" Jan 30 10:32:02 crc kubenswrapper[4984]: I0130 10:32:02.713463 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.717112622 podStartE2EDuration="6.713444911s" podCreationTimestamp="2026-01-30 10:31:56 +0000 UTC" firstStartedPulling="2026-01-30 10:31:57.365885444 +0000 UTC m=+1221.932189268" lastFinishedPulling="2026-01-30 10:32:01.362217733 +0000 UTC m=+1225.928521557" observedRunningTime="2026-01-30 10:32:02.693659962 +0000 UTC m=+1227.259963786" watchObservedRunningTime="2026-01-30 10:32:02.713444911 +0000 UTC m=+1227.279748735" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.034003 4984 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166419 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166531 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166575 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166641 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166679 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.166700 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") pod \"db250c1d-d110-46f5-ae22-46a1e507a922\" (UID: \"db250c1d-d110-46f5-ae22-46a1e507a922\") " Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.176882 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv" (OuterVolumeSpecName: "kube-api-access-4hjwv") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "kube-api-access-4hjwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.238621 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.269743 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjwv\" (UniqueName: \"kubernetes.io/projected/db250c1d-d110-46f5-ae22-46a1e507a922-kube-api-access-4hjwv\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.269783 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.275451 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.284497 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config" (OuterVolumeSpecName: "config") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.291656 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.335480 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db250c1d-d110-46f5-ae22-46a1e507a922" (UID: "db250c1d-d110-46f5-ae22-46a1e507a922"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.355147 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.355783 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.355801 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.355817 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="init" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.355822 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="init" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.356122 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" containerName="dnsmasq-dns" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.356844 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.373831 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.375720 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vkmk4" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.375919 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.376150 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379333 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379378 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379389 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.379397 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db250c1d-d110-46f5-ae22-46a1e507a922-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492343 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-combined-ca-bundle\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492463 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492495 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config-secret\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.492638 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj67\" (UniqueName: \"kubernetes.io/projected/141e094b-e8c8-4a61-b93c-8dec5ac89823-kube-api-access-gcj67\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594272 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-combined-ca-bundle\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594396 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594433 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config-secret\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.594530 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcj67\" (UniqueName: \"kubernetes.io/projected/141e094b-e8c8-4a61-b93c-8dec5ac89823-kube-api-access-gcj67\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.598457 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.608060 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-openstack-config-secret\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.609390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e094b-e8c8-4a61-b93c-8dec5ac89823-combined-ca-bundle\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.612933 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcj67\" (UniqueName: \"kubernetes.io/projected/141e094b-e8c8-4a61-b93c-8dec5ac89823-kube-api-access-gcj67\") pod \"openstackclient\" (UID: \"141e094b-e8c8-4a61-b93c-8dec5ac89823\") " pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657184 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca" exitCode=0
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657281 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631" exitCode=2
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657289 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00" exitCode=0
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657296 4984 generic.go:334] "Generic (PLEG): container finished" podID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerID="9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b" exitCode=0
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657373 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657468 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657512 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657524 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657538 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f5ff484-b6c4-42ea-ae17-1b11c214f435","Type":"ContainerDied","Data":"3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.657549 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3021defde28ee628905bde58f2aaea4fc9fa442953bdae4373a39ad7b2faf56b"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669854 4984 generic.go:334] "Generic (PLEG): container finished" podID="db250c1d-d110-46f5-ae22-46a1e507a922" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228" exitCode=0
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669943 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerDied","Data":"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669968 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f" event={"ID":"db250c1d-d110-46f5-ae22-46a1e507a922","Type":"ContainerDied","Data":"5804a7b25d440dcdbdeccb2de4750734b5adefbb2c8bb1ff519cee9eb3d7d6fa"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.669985 4984 scope.go:117] "RemoveContainer" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.670111 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-ldl9f"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.676334 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerStarted","Data":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.685187 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerStarted","Data":"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"}
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.685869 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.686702 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.716495 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" podStartSLOduration=4.716476519 podStartE2EDuration="4.716476519s" podCreationTimestamp="2026-01-30 10:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:03.708159113 +0000 UTC m=+1228.274462937" watchObservedRunningTime="2026-01-30 10:32:03.716476519 +0000 UTC m=+1228.282780343"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.741673 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.744297 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.755647 4984 scope.go:117] "RemoveContainer" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.759731 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"]
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.766776 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-ldl9f"]
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797474 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797681 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797713 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797782 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.797875 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.798010 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") pod \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\" (UID: \"3f5ff484-b6c4-42ea-ae17-1b11c214f435\") "
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.801218 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.802338 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.803592 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6" (OuterVolumeSpecName: "kube-api-access-kgvw6") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "kube-api-access-kgvw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.804505 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts" (OuterVolumeSpecName: "scripts") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.804548 4984 scope.go:117] "RemoveContainer" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"
Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.805031 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228\": container with ID starting with 49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228 not found: ID does not exist" containerID="49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.805074 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228"} err="failed to get container status \"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228\": rpc error: code = NotFound desc = could not find container \"49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228\": container with ID starting with 49cdc151d88339ebb206d62f50c9cb57f5ad46d17573dbc8ab935f9b3c964228 not found: ID does not exist"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.805101 4984 scope.go:117] "RemoveContainer" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"
Jan 30 10:32:03 crc kubenswrapper[4984]: E0130 10:32:03.805420 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37\": container with ID starting with 3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37 not found: ID does not exist" containerID="3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.805449 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37"} err="failed to get container status \"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37\": rpc error: code = NotFound desc = could not find container \"3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37\": container with ID starting with 3eefc25341fc0d4ed291f29c6a82f05dfc1e64760ee2cad12c2e73b0b28c0c37 not found: ID does not exist"
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902396 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902678 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902763 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f5ff484-b6c4-42ea-ae17-1b11c214f435-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.902836 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgvw6\" (UniqueName: \"kubernetes.io/projected/3f5ff484-b6c4-42ea-ae17-1b11c214f435-kube-api-access-kgvw6\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.903634 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.903883 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:03 crc kubenswrapper[4984]: I0130 10:32:03.934799 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data" (OuterVolumeSpecName: "config-data") pod "3f5ff484-b6c4-42ea-ae17-1b11c214f435" (UID: "3f5ff484-b6c4-42ea-ae17-1b11c214f435"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.004509 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.004893 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.004911 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f5ff484-b6c4-42ea-ae17-1b11c214f435-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.108384 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db250c1d-d110-46f5-ae22-46a1e507a922" path="/var/lib/kubelet/pods/db250c1d-d110-46f5-ae22-46a1e507a922/volumes"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.386419 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.473801 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cfd8d5fd8-lwgk4"]
Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474174 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474193 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent"
Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474207 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474214 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd"
Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474230 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474237 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core"
Jan 30 10:32:04 crc kubenswrapper[4984]: E0130 10:32:04.474276 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474283 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474469 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-central-agent"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474491 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="sg-core"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474505 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="ceilometer-notification-agent"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.474519 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" containerName="proxy-httpd"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.475419 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.477766 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.478224 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.515083 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cfd8d5fd8-lwgk4"]
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.619809 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-internal-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620160 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx446\" (UniqueName: \"kubernetes.io/projected/217935e2-7a1e-44a6-b6fd-e64c41155d6d-kube-api-access-vx446\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620186 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data-custom\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620256 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620432 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-combined-ca-bundle\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620601 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217935e2-7a1e-44a6-b6fd-e64c41155d6d-logs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.620851 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-public-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.697912 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"141e094b-e8c8-4a61-b93c-8dec5ac89823","Type":"ContainerStarted","Data":"36554c84eaa2f6d62c6a0a85214521f8ab2e6261e30a6786f151de4f6c895299"}
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.701842 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerStarted","Data":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"}
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.702011 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" containerID="cri-o://46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" gracePeriod=30
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.702587 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" containerID="cri-o://88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" gracePeriod=30
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.702743 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.711464 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerStarted","Data":"1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e"}
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.711556 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725088 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217935e2-7a1e-44a6-b6fd-e64c41155d6d-logs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725282 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-public-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725369 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-internal-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx446\" (UniqueName: \"kubernetes.io/projected/217935e2-7a1e-44a6-b6fd-e64c41155d6d-kube-api-access-vx446\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725616 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data-custom\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725693 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.725744 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-combined-ca-bundle\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.726680 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/217935e2-7a1e-44a6-b6fd-e64c41155d6d-logs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.729309 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.729292643 podStartE2EDuration="5.729292643s" podCreationTimestamp="2026-01-30 10:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:04.728664676 +0000 UTC m=+1229.294968500" watchObservedRunningTime="2026-01-30 10:32:04.729292643 +0000 UTC m=+1229.295596467"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.743323 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-combined-ca-bundle\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.743751 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-internal-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.746509 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data-custom\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.747051 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-config-data\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.747746 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx446\" (UniqueName: \"kubernetes.io/projected/217935e2-7a1e-44a6-b6fd-e64c41155d6d-kube-api-access-vx446\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.782809 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217935e2-7a1e-44a6-b6fd-e64c41155d6d-public-tls-certs\") pod \"barbican-api-6cfd8d5fd8-lwgk4\" (UID: \"217935e2-7a1e-44a6-b6fd-e64c41155d6d\") " pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.797428 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.812040 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.828224 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.834684 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.836794 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.839138 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.859867 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.882036 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960460 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960656 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960682 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960759 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0"
Jan 30 10:32:04 crc kubenswrapper[4984]: I0130 10:32:04.960784 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\")
" pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063682 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063842 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063868 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063901 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063936 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.063953 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.064006 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.064907 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.069037 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.073831 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.075839 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.081692 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.083170 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.091029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"ceilometer-0\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.206073 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.400354 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cfd8d5fd8-lwgk4"] Jan 30 10:32:05 crc kubenswrapper[4984]: W0130 10:32:05.480167 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod217935e2_7a1e_44a6_b6fd_e64c41155d6d.slice/crio-f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e WatchSource:0}: Error finding container f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e: Status 404 returned error can't find the container with id f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.484215 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.536947 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.579813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580733 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580885 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.580916 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.581054 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.581122 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") pod \"856d75b5-d459-46da-99d3-123ebe89a26d\" (UID: \"856d75b5-d459-46da-99d3-123ebe89a26d\") " Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.583160 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs" (OuterVolumeSpecName: "logs") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.583867 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.598011 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts" (OuterVolumeSpecName: "scripts") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.599445 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8" (OuterVolumeSpecName: "kube-api-access-fsrk8") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "kube-api-access-fsrk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.612642 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.633221 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.663350 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data" (OuterVolumeSpecName: "config-data") pod "856d75b5-d459-46da-99d3-123ebe89a26d" (UID: "856d75b5-d459-46da-99d3-123ebe89a26d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684240 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsrk8\" (UniqueName: \"kubernetes.io/projected/856d75b5-d459-46da-99d3-123ebe89a26d-kube-api-access-fsrk8\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684298 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684314 4984 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/856d75b5-d459-46da-99d3-123ebe89a26d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684322 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684331 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684759 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856d75b5-d459-46da-99d3-123ebe89a26d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.684912 4984 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/856d75b5-d459-46da-99d3-123ebe89a26d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742651 4984 generic.go:334] "Generic 
(PLEG): container finished" podID="856d75b5-d459-46da-99d3-123ebe89a26d" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" exitCode=0 Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742681 4984 generic.go:334] "Generic (PLEG): container finished" podID="856d75b5-d459-46da-99d3-123ebe89a26d" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" exitCode=143 Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742727 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerDied","Data":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerDied","Data":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742762 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"856d75b5-d459-46da-99d3-123ebe89a26d","Type":"ContainerDied","Data":"5d408605319c89d081b5548ebdb4c7ea288ca2bdefa7e08a28be726765947e9d"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742777 4984 scope.go:117] "RemoveContainer" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.742895 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.757579 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" event={"ID":"217935e2-7a1e-44a6-b6fd-e64c41155d6d","Type":"ContainerStarted","Data":"f3264032bb1cf0d01ca06d61d78c0db545c6f8424f8c1ae6ca67e1e940cea80e"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.774330 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.786715 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerStarted","Data":"26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef"} Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.789313 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.800939 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.811083 4984 scope.go:117] "RemoveContainer" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815079 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.815566 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815585 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.815603 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815611 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815835 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api-log" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.815859 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" containerName="cinder-api" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.817467 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.821898 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.954574742 podStartE2EDuration="6.821876301s" podCreationTimestamp="2026-01-30 10:31:59 +0000 UTC" firstStartedPulling="2026-01-30 10:32:02.153910161 +0000 UTC m=+1226.720213985" lastFinishedPulling="2026-01-30 10:32:03.02121172 +0000 UTC m=+1227.587515544" observedRunningTime="2026-01-30 10:32:05.816802343 +0000 UTC m=+1230.383106167" watchObservedRunningTime="2026-01-30 10:32:05.821876301 +0000 UTC m=+1230.388180125" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.823767 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.824126 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.827740 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 10:32:05 crc kubenswrapper[4984]: 
I0130 10:32:05.858889 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.875060 4984 scope.go:117] "RemoveContainer" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.883412 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": container with ID starting with 88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6 not found: ID does not exist" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883454 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} err="failed to get container status \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": rpc error: code = NotFound desc = could not find container \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": container with ID starting with 88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883480 4984 scope.go:117] "RemoveContainer" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 10:32:05 crc kubenswrapper[4984]: E0130 10:32:05.883813 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": container with ID starting with 46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0 not found: ID does not exist" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 
10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883870 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} err="failed to get container status \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": rpc error: code = NotFound desc = could not find container \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": container with ID starting with 46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.883897 4984 scope.go:117] "RemoveContainer" containerID="88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.884289 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6"} err="failed to get container status \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": rpc error: code = NotFound desc = could not find container \"88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6\": container with ID starting with 88da86467aef75467cdeae7f2a03061db67efda258256a37a8ee8a10e8e2e0c6 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.884601 4984 scope.go:117] "RemoveContainer" containerID="46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.888389 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0"} err="failed to get container status \"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": rpc error: code = NotFound desc = could not find container 
\"46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0\": container with ID starting with 46d31506a14dcbb326f4a5c76135a63ad91fe0fc3531e770d2782fa330ae35c0 not found: ID does not exist" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894362 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8d6abba-9a6d-4a99-a68b-659c1e111893-logs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894475 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8d6abba-9a6d-4a99-a68b-659c1e111893-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data-custom\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0" Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894604 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8xt\" (UniqueName: \"kubernetes.io/projected/a8d6abba-9a6d-4a99-a68b-659c1e111893-kube-api-access-4h8xt\") pod \"cinder-api-0\" 
(UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894648 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-scripts\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894709 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894742 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.894776 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996630 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8xt\" (UniqueName: \"kubernetes.io/projected/a8d6abba-9a6d-4a99-a68b-659c1e111893-kube-api-access-4h8xt\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996667 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-scripts\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996704 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996725 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996757 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996805 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8d6abba-9a6d-4a99-a68b-659c1e111893-logs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996819 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996875 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8d6abba-9a6d-4a99-a68b-659c1e111893-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.996896 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data-custom\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.998001 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8d6abba-9a6d-4a99-a68b-659c1e111893-logs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:05 crc kubenswrapper[4984]: I0130 10:32:05.998775 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8d6abba-9a6d-4a99-a68b-659c1e111893-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.001771 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data-custom\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.002331 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.004710 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-config-data\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.004788 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.013006 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-scripts\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.013620 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d6abba-9a6d-4a99-a68b-659c1e111893-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.030331 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8xt\" (UniqueName: \"kubernetes.io/projected/a8d6abba-9a6d-4a99-a68b-659c1e111893-kube-api-access-4h8xt\") pod \"cinder-api-0\" (UID: \"a8d6abba-9a6d-4a99-a68b-659c1e111893\") " pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.112498 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5ff484-b6c4-42ea-ae17-1b11c214f435" path="/var/lib/kubelet/pods/3f5ff484-b6c4-42ea-ae17-1b11c214f435/volumes"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.113640 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856d75b5-d459-46da-99d3-123ebe89a26d" path="/var/lib/kubelet/pods/856d75b5-d459-46da-99d3-123ebe89a26d/volumes"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.160905 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.639324 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.802284 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307"}
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.802333 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"38638789db9d29c3ef911b6de9f957b454b4ebfc3c25c50089b77550538df8d3"}
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.803837 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a8d6abba-9a6d-4a99-a68b-659c1e111893","Type":"ContainerStarted","Data":"07efaa1e3ddf1f9bed633f866e97d57d0a62927318cdc459f3e2924950215643"}
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.808508 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" event={"ID":"217935e2-7a1e-44a6-b6fd-e64c41155d6d","Type":"ContainerStarted","Data":"2d9b322a61d7e7703203e888c055e0be29b8fc16e5677e90111c7155f707dadd"}
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.808539 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" event={"ID":"217935e2-7a1e-44a6-b6fd-e64c41155d6d","Type":"ContainerStarted","Data":"2a040a81aefde9307ef906faa88a260b742855fb6023e6f2987ffdd59efd5fa8"}
Jan 30 10:32:06 crc kubenswrapper[4984]: I0130 10:32:06.838197 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" podStartSLOduration=2.838178901 podStartE2EDuration="2.838178901s" podCreationTimestamp="2026-01-30 10:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:06.826708118 +0000 UTC m=+1231.393011952" watchObservedRunningTime="2026-01-30 10:32:06.838178901 +0000 UTC m=+1231.404482725"
Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.830769 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a8d6abba-9a6d-4a99-a68b-659c1e111893","Type":"ContainerStarted","Data":"98af5f2d73575000638d0a726277b59148b003349f5e37c0cf0923dfe0121884"}
Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.833525 4984 generic.go:334] "Generic (PLEG): container finished" podID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerID="886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529" exitCode=0
Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.833578 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerDied","Data":"886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529"}
Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.836717 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5"}
Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.836788 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:07 crc kubenswrapper[4984]: I0130 10:32:07.836821 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4"
Jan 30 10:32:08 crc kubenswrapper[4984]: I0130 10:32:08.845832 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a8d6abba-9a6d-4a99-a68b-659c1e111893","Type":"ContainerStarted","Data":"bd242f1bd39cc553d6bcdfef5b3dfa3a5cac86dad44d22bc1b8164f58c4f7dc3"}
Jan 30 10:32:08 crc kubenswrapper[4984]: I0130 10:32:08.847322 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 30 10:32:08 crc kubenswrapper[4984]: I0130 10:32:08.851048 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b"}
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.232727 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5hx59"
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.255716 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.255695436 podStartE2EDuration="4.255695436s" podCreationTimestamp="2026-01-30 10:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:08.868732679 +0000 UTC m=+1233.435036513" watchObservedRunningTime="2026-01-30 10:32:09.255695436 +0000 UTC m=+1233.821999270"
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.264974 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") pod \"2405c6ec-2510-4786-a602-ae85d358ed1f\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") "
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.265300 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") pod \"2405c6ec-2510-4786-a602-ae85d358ed1f\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") "
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.265505 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") pod \"2405c6ec-2510-4786-a602-ae85d358ed1f\" (UID: \"2405c6ec-2510-4786-a602-ae85d358ed1f\") "
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.271080 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj" (OuterVolumeSpecName: "kube-api-access-75vlj") pod "2405c6ec-2510-4786-a602-ae85d358ed1f" (UID: "2405c6ec-2510-4786-a602-ae85d358ed1f"). InnerVolumeSpecName "kube-api-access-75vlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.307852 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2405c6ec-2510-4786-a602-ae85d358ed1f" (UID: "2405c6ec-2510-4786-a602-ae85d358ed1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.311383 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config" (OuterVolumeSpecName: "config") pod "2405c6ec-2510-4786-a602-ae85d358ed1f" (UID: "2405c6ec-2510-4786-a602-ae85d358ed1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.366688 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vlj\" (UniqueName: \"kubernetes.io/projected/2405c6ec-2510-4786-a602-ae85d358ed1f-kube-api-access-75vlj\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.366727 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.366737 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2405c6ec-2510-4786-a602-ae85d358ed1f-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.534125 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.865417 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5hx59" event={"ID":"2405c6ec-2510-4786-a602-ae85d358ed1f","Type":"ContainerDied","Data":"b22cfaa6ea4686fc0571245806e8e06ec7680b75dec3155d20471ab3af1337c6"}
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.865777 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22cfaa6ea4686fc0571245806e8e06ec7680b75dec3155d20471ab3af1337c6"
Jan 30 10:32:09 crc kubenswrapper[4984]: I0130 10:32:09.865472 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5hx59"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.051041 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.159809 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.166882 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"]
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.167174 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" containerID="cri-o://b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" gracePeriod=10
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.169502 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.205403 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"]
Jan 30 10:32:10 crc kubenswrapper[4984]: E0130 10:32:10.205816 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerName="neutron-db-sync"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.205834 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerName="neutron-db-sync"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.205993 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" containerName="neutron-db-sync"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.206989 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.289753 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"]
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.380036 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-599cd9b588-9ll76"]
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.390215 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.394797 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.394957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.394988 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.395010 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.395040 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.395479 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.401730 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"]
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.410732 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.411131 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.411446 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.411752 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7t44v"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498599 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498638 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498660 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498685 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498735 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498765 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498793 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498812 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498832 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498850 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.498888 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.500007 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.502974 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.503515 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.504357 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.505000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.509641 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.528312 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"dnsmasq-dns-6578955fd5-k7xgx\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") " pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600542 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600784 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600828 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.600864 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.605641 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.609432 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.609542 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.610324 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.619779 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"neutron-599cd9b588-9ll76\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") " pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.745556 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.828473 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.894911 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.905026 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerStarted","Data":"bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8"}
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.907321 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.930857 4984 generic.go:334] "Generic (PLEG): container finished" podID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" exitCode=0
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.931785 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.931916 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerDied","Data":"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"}
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.935705 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw" event={"ID":"d2a86d1a-4829-4934-83dd-b52dc378a4cf","Type":"ContainerDied","Data":"748555b8a734b5492519c14029befb43f9a48dbcf5b004ff2684095fa68c51f4"}
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.935844 4984 scope.go:117] "RemoveContainer" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.976952 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.264728106 podStartE2EDuration="6.976936618s" podCreationTimestamp="2026-01-30 10:32:04 +0000 UTC" firstStartedPulling="2026-01-30 10:32:05.823685131 +0000 UTC m=+1230.389988955" lastFinishedPulling="2026-01-30 10:32:09.535893643 +0000 UTC m=+1234.102197467" observedRunningTime="2026-01-30 10:32:10.945610514 +0000 UTC m=+1235.511914338" watchObservedRunningTime="2026-01-30 10:32:10.976936618 +0000 UTC m=+1235.543240442"
Jan 30 10:32:10 crc kubenswrapper[4984]: I0130 10:32:10.999736 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.008915 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009027 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009055 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009096 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009121 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.009257 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") pod \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\" (UID: \"d2a86d1a-4829-4934-83dd-b52dc378a4cf\") "
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.011137 4984 scope.go:117] "RemoveContainer" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec"
Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.028484
4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm" (OuterVolumeSpecName: "kube-api-access-jdxzm") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "kube-api-access-jdxzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.064919 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.068579 4984 scope.go:117] "RemoveContainer" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" Jan 30 10:32:11 crc kubenswrapper[4984]: E0130 10:32:11.069372 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273\": container with ID starting with b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273 not found: ID does not exist" containerID="b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.069415 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273"} err="failed to get container status \"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273\": rpc error: code = NotFound desc = could not find container \"b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273\": container with ID starting with 
b2e982e43694b050dd2105a53e5400098a8d618da232dea4621e36a6b9539273 not found: ID does not exist" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.069440 4984 scope.go:117] "RemoveContainer" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" Jan 30 10:32:11 crc kubenswrapper[4984]: E0130 10:32:11.071147 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec\": container with ID starting with e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec not found: ID does not exist" containerID="e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.071187 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec"} err="failed to get container status \"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec\": rpc error: code = NotFound desc = could not find container \"e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec\": container with ID starting with e20b6d7b279721faeea65b2170e079973b7496ec0d5d483cdcc6933c0b7b77ec not found: ID does not exist" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.080755 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.087297 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.092759 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config" (OuterVolumeSpecName: "config") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.105134 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2a86d1a-4829-4934-83dd-b52dc378a4cf" (UID: "d2a86d1a-4829-4934-83dd-b52dc378a4cf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113378 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113434 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113446 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113455 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdxzm\" (UniqueName: \"kubernetes.io/projected/d2a86d1a-4829-4934-83dd-b52dc378a4cf-kube-api-access-jdxzm\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113484 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.113495 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2a86d1a-4829-4934-83dd-b52dc378a4cf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.273367 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.284420 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-8bwxw"] Jan 30 
10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.421666 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.447704 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"] Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.939928 4984 generic.go:334] "Generic (PLEG): container finished" podID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerID="56bcdf99f2e8704de387e7830f17377f1640401317904a569f1e4bd023c74298" exitCode=0 Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.940151 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerDied","Data":"56bcdf99f2e8704de387e7830f17377f1640401317904a569f1e4bd023c74298"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.940300 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerStarted","Data":"944e20dd436d6475eadb44cdbbb965933e9f99f6729e1968115646dd8b334bc1"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952373 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler" containerID="cri-o://1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e" gracePeriod=30 Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952671 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerStarted","Data":"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952709 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerStarted","Data":"726ba7faaff55c103e2271e253ff0f17293623696cf0c95eaae899332787dccc"} Jan 30 10:32:11 crc kubenswrapper[4984]: I0130 10:32:11.952887 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe" containerID="cri-o://26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef" gracePeriod=30 Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.109999 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" path="/var/lib/kubelet/pods/d2a86d1a-4829-4934-83dd-b52dc378a4cf/volumes" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.136367 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5565c8d7-xqnh6"] Jan 30 10:32:12 crc kubenswrapper[4984]: E0130 10:32:12.136869 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="init" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.136891 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="init" Jan 30 10:32:12 crc kubenswrapper[4984]: E0130 10:32:12.136918 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.136926 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.137142 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a86d1a-4829-4934-83dd-b52dc378a4cf" containerName="dnsmasq-dns" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.138368 4984 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.142855 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.148237 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.163057 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5565c8d7-xqnh6"] Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242290 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-public-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242339 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-internal-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242424 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242490 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-httpd-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242546 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-ovndb-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242621 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvqb\" (UniqueName: \"kubernetes.io/projected/0e442774-b2c1-418a-a5b2-edfd20f23c27-kube-api-access-8hvqb\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.242637 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-combined-ca-bundle\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.273003 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77f6d8f475-hmb99"] Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.276870 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.279816 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.280205 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.280292 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.297530 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f6d8f475-hmb99"] Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.343874 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-ovndb-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344231 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hvqb\" (UniqueName: \"kubernetes.io/projected/0e442774-b2c1-418a-a5b2-edfd20f23c27-kube-api-access-8hvqb\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-combined-ca-bundle\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344414 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-public-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344442 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-internal-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344485 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.344549 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-httpd-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.405993 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-public-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.408444 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-ovndb-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.410691 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-combined-ca-bundle\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.412690 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.416668 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hvqb\" (UniqueName: \"kubernetes.io/projected/0e442774-b2c1-418a-a5b2-edfd20f23c27-kube-api-access-8hvqb\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.417749 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-httpd-config\") pod \"neutron-5565c8d7-xqnh6\" (UID: \"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.425371 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e442774-b2c1-418a-a5b2-edfd20f23c27-internal-tls-certs\") pod \"neutron-5565c8d7-xqnh6\" (UID: 
\"0e442774-b2c1-418a-a5b2-edfd20f23c27\") " pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446392 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-log-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446450 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9g9b\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-kube-api-access-l9g9b\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446487 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-internal-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446543 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-combined-ca-bundle\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446570 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-public-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-config-data\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446618 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-run-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.446674 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-etc-swift\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.484538 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549178 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-run-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549264 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-etc-swift\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549373 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-log-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549389 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9g9b\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-kube-api-access-l9g9b\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549417 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-internal-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" 
Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549443 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-combined-ca-bundle\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549463 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-public-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.549479 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-config-data\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.550015 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-run-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.575661 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a88ca399-adf6-4df4-8216-84de7603712b-log-httpd\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.576392 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-internal-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.577221 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-combined-ca-bundle\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.580327 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-etc-swift\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.583660 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-public-tls-certs\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.584483 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9g9b\" (UniqueName: \"kubernetes.io/projected/a88ca399-adf6-4df4-8216-84de7603712b-kube-api-access-l9g9b\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.586498 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a88ca399-adf6-4df4-8216-84de7603712b-config-data\") pod \"swift-proxy-77f6d8f475-hmb99\" (UID: \"a88ca399-adf6-4df4-8216-84de7603712b\") " pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.834064 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.963139 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerStarted","Data":"98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0"} Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.966881 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerStarted","Data":"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"} Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.967017 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-599cd9b588-9ll76" Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.968903 4984 generic.go:334] "Generic (PLEG): container finished" podID="cde56acd-942d-47dd-8417-8c92170502ce" containerID="26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef" exitCode=0 Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.969814 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerDied","Data":"26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef"} Jan 30 10:32:12 crc kubenswrapper[4984]: I0130 10:32:12.997800 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" podStartSLOduration=2.99777891 
podStartE2EDuration="2.99777891s" podCreationTimestamp="2026-01-30 10:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:12.985470938 +0000 UTC m=+1237.551774752" watchObservedRunningTime="2026-01-30 10:32:12.99777891 +0000 UTC m=+1237.564082744" Jan 30 10:32:13 crc kubenswrapper[4984]: I0130 10:32:13.016166 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-599cd9b588-9ll76" podStartSLOduration=3.016147465 podStartE2EDuration="3.016147465s" podCreationTimestamp="2026-01-30 10:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:13.004713407 +0000 UTC m=+1237.571017221" watchObservedRunningTime="2026-01-30 10:32:13.016147465 +0000 UTC m=+1237.582451289" Jan 30 10:32:13 crc kubenswrapper[4984]: I0130 10:32:13.978450 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.291142 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293659 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent" containerID="cri-o://2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307" gracePeriod=30 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293704 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd" containerID="cri-o://bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8" gracePeriod=30 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 
10:32:14.293737 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core" containerID="cri-o://945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b" gracePeriod=30 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.293737 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent" containerID="cri-o://0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5" gracePeriod=30 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990014 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8" exitCode=0 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990056 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b" exitCode=2 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990071 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5" exitCode=0 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990081 4984 generic.go:334] "Generic (PLEG): container finished" podID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerID="2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307" exitCode=0 Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990096 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8"} Jan 30 10:32:14 crc 
kubenswrapper[4984]: I0130 10:32:14.990148 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b"} Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5"} Jan 30 10:32:14 crc kubenswrapper[4984]: I0130 10:32:14.990171 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307"} Jan 30 10:32:15 crc kubenswrapper[4984]: I0130 10:32:15.486043 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6b65cc758d-9hz7t" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 30 10:32:15 crc kubenswrapper[4984]: I0130 10:32:15.486183 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.003038 4984 generic.go:334] "Generic (PLEG): container finished" podID="cde56acd-942d-47dd-8417-8c92170502ce" containerID="1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e" exitCode=0 Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.003125 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerDied","Data":"1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e"} Jan 30 
10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.336218 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.683035 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cfd8d5fd8-lwgk4" Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.771220 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.771447 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" containerID="cri-o://ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36" gracePeriod=30 Jan 30 10:32:16 crc kubenswrapper[4984]: I0130 10:32:16.772049 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" containerID="cri-o://8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607" gracePeriod=30 Jan 30 10:32:17 crc kubenswrapper[4984]: I0130 10:32:17.023407 4984 generic.go:334] "Generic (PLEG): container finished" podID="04477670-b6dd-441f-909a-e6b56bf335d5" containerID="ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36" exitCode=143 Jan 30 10:32:17 crc kubenswrapper[4984]: I0130 10:32:17.023477 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerDied","Data":"ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36"} Jan 30 10:32:18 crc kubenswrapper[4984]: I0130 10:32:18.527374 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 
10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.664118 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.665115 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" containerID="cri-o://b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540" gracePeriod=30 Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.665508 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd" containerID="cri-o://f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b" gracePeriod=30 Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.947841 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:36198->10.217.0.162:9311: read: connection reset by peer" Jan 30 10:32:19 crc kubenswrapper[4984]: I0130 10:32:19.948171 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-686dddff74-vgg85" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:36194->10.217.0.162:9311: read: connection reset by peer" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.023038 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.069453 4984 generic.go:334] "Generic (PLEG): container finished" podID="1238c32f-7644-4b33-8960-b97c64733162" containerID="e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500" exitCode=137 Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.069766 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerDied","Data":"e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500"} Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.083629 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.083684 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cde56acd-942d-47dd-8417-8c92170502ce","Type":"ContainerDied","Data":"32c3d1db71e8a13d68fda5ef78f6c6bd587624b6835f4d6b0c34dcb6f2a6bda8"} Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.083756 4984 scope.go:117] "RemoveContainer" containerID="26329e646fcc2df43f4de6f45cb4ff62828435c82bb875ccdec438d9125ba3ef" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.100290 4984 generic.go:334] "Generic (PLEG): container finished" podID="04477670-b6dd-441f-909a-e6b56bf335d5" containerID="8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607" exitCode=0 Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.107182 4984 generic.go:334] "Generic (PLEG): container finished" podID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerID="b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540" exitCode=143 Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.129923 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" 
event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerDied","Data":"8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607"} Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.129964 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerDied","Data":"b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540"} Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.144574 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.145485 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.164925 4984 scope.go:117] "RemoveContainer" containerID="1369dd6fa40d6cf6785def4f85a6f78018eda50088ff42cf352345db7e62485e" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201439 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201543 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201632 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: 
\"cde56acd-942d-47dd-8417-8c92170502ce\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201708 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201804 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.201829 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") pod \"cde56acd-942d-47dd-8417-8c92170502ce\" (UID: \"cde56acd-942d-47dd-8417-8c92170502ce\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.204526 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.211663 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.213431 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts" (OuterVolumeSpecName: "scripts") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.213766 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq" (OuterVolumeSpecName: "kube-api-access-4ggvq") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "kube-api-access-4ggvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.294624 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.307817 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.307869 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.307936 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308646 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308750 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308804 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvlx\" (UniqueName: 
\"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308837 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309071 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309096 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309121 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc 
kubenswrapper[4984]: I0130 10:32:20.309196 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309217 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") pod \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\" (UID: \"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309267 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") pod \"1238c32f-7644-4b33-8960-b97c64733162\" (UID: \"1238c32f-7644-4b33-8960-b97c64733162\") " Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309676 4984 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cde56acd-942d-47dd-8417-8c92170502ce-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309694 4984 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309703 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309712 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.309720 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggvq\" (UniqueName: \"kubernetes.io/projected/cde56acd-942d-47dd-8417-8c92170502ce-kube-api-access-4ggvq\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.308651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.312854 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs" (OuterVolumeSpecName: "logs") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.313515 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts" (OuterVolumeSpecName: "scripts") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.313552 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.321411 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq" (OuterVolumeSpecName: "kube-api-access-hsxfq") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "kube-api-access-hsxfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.321827 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.329978 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx" (OuterVolumeSpecName: "kube-api-access-4zvlx") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "kube-api-access-4zvlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.346712 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data" (OuterVolumeSpecName: "config-data") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.360420 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts" (OuterVolumeSpecName: "scripts") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.372403 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.384323 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.388195 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f6d8f475-hmb99"]
Jan 30 10:32:20 crc kubenswrapper[4984]: W0130 10:32:20.390691 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda88ca399_adf6_4df4_8216_84de7603712b.slice/crio-38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4 WatchSource:0}: Error finding container 38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4: Status 404 returned error can't find the container with id 38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.397299 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414579 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxfq\" (UniqueName: \"kubernetes.io/projected/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-kube-api-access-hsxfq\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414621 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1238c32f-7644-4b33-8960-b97c64733162-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414631 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414640 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414648 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414656 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414665 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414674 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvlx\" (UniqueName: \"kubernetes.io/projected/1238c32f-7644-4b33-8960-b97c64733162-kube-api-access-4zvlx\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414683 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414692 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1238c32f-7644-4b33-8960-b97c64733162-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.414700 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.415187 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data" (OuterVolumeSpecName: "config-data") pod "cde56acd-942d-47dd-8417-8c92170502ce" (UID: "cde56acd-942d-47dd-8417-8c92170502ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.437881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1238c32f-7644-4b33-8960-b97c64733162" (UID: "1238c32f-7644-4b33-8960-b97c64733162"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.461618 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.487093 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data" (OuterVolumeSpecName: "config-data") pod "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" (UID: "c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516181 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516298 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516320 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516527 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516556 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") pod \"04477670-b6dd-441f-909a-e6b56bf335d5\" (UID: \"04477670-b6dd-441f-909a-e6b56bf335d5\") "
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516904 4984 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1238c32f-7644-4b33-8960-b97c64733162-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516917 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde56acd-942d-47dd-8417-8c92170502ce-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516926 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.516933 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.520515 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8" (OuterVolumeSpecName: "kube-api-access-jmcx8") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "kube-api-access-jmcx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.521038 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs" (OuterVolumeSpecName: "logs") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.527527 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.561778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.584309 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data" (OuterVolumeSpecName: "config-data") pod "04477670-b6dd-441f-909a-e6b56bf335d5" (UID: "04477670-b6dd-441f-909a-e6b56bf335d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593295 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7vrp9"]
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593638 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593654 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593667 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593673 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593686 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593692 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593701 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593706 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593718 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593724 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593737 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593743 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593754 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593761 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593771 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593776 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593787 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593793 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log"
Jan 30 10:32:20 crc kubenswrapper[4984]: E0130 10:32:20.593805 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593810 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593970 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.593980 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="proxy-httpd"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594050 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594065 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="sg-core"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594073 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1238c32f-7644-4b33-8960-b97c64733162" containerName="horizon-log"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594081 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" containerName="barbican-api-log"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594087 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="probe"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594099 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-central-agent"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594106 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde56acd-942d-47dd-8417-8c92170502ce" containerName="cinder-scheduler"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594114 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" containerName="ceilometer-notification-agent"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.594641 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618117 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04477670-b6dd-441f-909a-e6b56bf335d5-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618158 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618169 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618178 4984 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04477670-b6dd-441f-909a-e6b56bf335d5-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.618187 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcx8\" (UniqueName: \"kubernetes.io/projected/04477670-b6dd-441f-909a-e6b56bf335d5-kube-api-access-jmcx8\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.642085 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7vrp9"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.696151 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.697444 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.704238 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.719364 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.719529 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.740279 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.750103 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.760616 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.762016 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.764072 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.792904 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.814656 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.815878 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.819161 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821588 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821624 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.821697 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.822537 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.834033 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.839064 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.841293 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"nova-api-db-create-7vrp9\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") " pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925442 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925876 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbkk\" (UniqueName: \"kubernetes.io/projected/4ced7140-d346-43c7-9139-7f460af079e2-kube-api-access-qnbkk\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925914 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.925965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926004 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926580 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ced7140-d346-43c7-9139-7f460af079e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926641 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.926846 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.928392 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.930693 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.930894 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.944227 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"nova-cell0-db-create-qs8g9\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") " pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.963699 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.980325 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"]
Jan 30 10:32:20 crc kubenswrapper[4984]: I0130 10:32:20.982510 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.022588 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"]
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.022816 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" containerID="cri-o://f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" gracePeriod=10
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.032924 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ced7140-d346-43c7-9139-7f460af079e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.032963 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.032994 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033044 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033091 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033130 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbkk\" (UniqueName: \"kubernetes.io/projected/4ced7140-d346-43c7-9139-7f460af079e2-kube-api-access-qnbkk\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033171 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.033192 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.036154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ced7140-d346-43c7-9139-7f460af079e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.036894 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.040955 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.050978 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"]
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.053482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"nova-api-9847-account-create-update-p46tr\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") " pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:21 crc kubenswrapper[4984]: W0130 10:32:21.053728 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e442774_b2c1_418a_a5b2_edfd20f23c27.slice/crio-e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c WatchSource:0}: Error finding container e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c: Status 404 returned error can't find the container with id e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.054823 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbkk\" (UniqueName: \"kubernetes.io/projected/4ced7140-d346-43c7-9139-7f460af079e2-kube-api-access-qnbkk\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.056118 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.056781 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.057438 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.058371 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"]
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.059184 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ced7140-d346-43c7-9139-7f460af079e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ced7140-d346-43c7-9139-7f460af079e2\") " pod="openstack/cinder-scheduler-0"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.059719 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.070078 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.072156 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5565c8d7-xqnh6"]
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.083186 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"]
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.125524 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"141e094b-e8c8-4a61-b93c-8dec5ac89823","Type":"ContainerStarted","Data":"dfe2def501ea9b4238bae8b67e193723236fcc6fdce3113dd8be629b8c86ffc3"}
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.130514 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b65cc758d-9hz7t"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.130555 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b65cc758d-9hz7t" event={"ID":"1238c32f-7644-4b33-8960-b97c64733162","Type":"ContainerDied","Data":"69bd05a6495e5cb7cdf4e1d3db592b4ecb95799d07ea2642a2cb5673af58d135"}
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.130593 4984 scope.go:117] "RemoveContainer" containerID="5493ade86936da3c95621d2f2b00875678dfc7dae927f605f1bcf9035e6196e3"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.133638 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f6d8f475-hmb99" event={"ID":"a88ca399-adf6-4df4-8216-84de7603712b","Type":"ContainerStarted","Data":"eea68d359f72674ff8e2398b4f8bdd404622b91454d36c4c7f4f4f8ec7687657"}
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.133673 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f6d8f475-hmb99" event={"ID":"a88ca399-adf6-4df4-8216-84de7603712b","Type":"ContainerStarted","Data":"38aab2f34b741facacb894313a9e0ed1f396db233e4899a60dcd8a85041c0cd4"}
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.135024 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp"
Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.135095 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " 
pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.150029 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-686dddff74-vgg85" event={"ID":"04477670-b6dd-441f-909a-e6b56bf335d5","Type":"ContainerDied","Data":"a25f262112b6e85807259ba24dced766ff4543ab38949ac83106e59f485a1631"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.150069 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-686dddff74-vgg85" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.151954 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.773506978 podStartE2EDuration="18.151932828s" podCreationTimestamp="2026-01-30 10:32:03 +0000 UTC" firstStartedPulling="2026-01-30 10:32:04.490084584 +0000 UTC m=+1229.056388408" lastFinishedPulling="2026-01-30 10:32:19.868510434 +0000 UTC m=+1244.434814258" observedRunningTime="2026-01-30 10:32:21.146800329 +0000 UTC m=+1245.713104163" watchObservedRunningTime="2026-01-30 10:32:21.151932828 +0000 UTC m=+1245.718236642" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.188767 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5565c8d7-xqnh6" event={"ID":"0e442774-b2c1-418a-a5b2-edfd20f23c27","Type":"ContainerStarted","Data":"e7a2847c527c57f9285377dc7aef600061e22c381151ad6b1f951f89ceabf05c"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.221346 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.224048 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.231381 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f","Type":"ContainerDied","Data":"38638789db9d29c3ef911b6de9f957b454b4ebfc3c25c50089b77550538df8d3"} Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.231491 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.237320 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6b65cc758d-9hz7t"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238356 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238398 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238430 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.238666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.241229 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.244601 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.248026 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.255213 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.260053 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.261534 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"nova-cell1-db-create-xjhtp\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") " pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.266995 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.303372 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.322819 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.334284 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-686dddff74-vgg85"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.341210 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.341427 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.342201 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.360646 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"nova-cell0-f837-account-create-update-tljj4\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") " pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.360711 4984 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.376469 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.378878 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.381525 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.384788 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.385629 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.388012 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.444268 4984 scope.go:117] "RemoveContainer" containerID="e266f18121a096f3fe3e49d05abb63a2d173ba4f6fec027f0c56354304bc3500" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.446406 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.446575 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: 
\"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.461365 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548334 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548693 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548755 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548809 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548886 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548926 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.548957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.549011 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.549032 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.549810 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.586803 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"nova-cell1-32c7-account-create-update-2mdsq\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") " pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.591066 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651107 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651208 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651256 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 
10:32:21.651303 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651321 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.651369 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.652495 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.653610 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " 
pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.656000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.656613 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.657619 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.658217 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.663508 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.681671 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"ceilometer-0\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.718867 4984 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.743542 4984 scope.go:117] "RemoveContainer" containerID="8c43c6ece75902af1224a2ddeee2440c861acd7173cc6619e7ad3179c3ca2607" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.859807 4984 scope.go:117] "RemoveContainer" containerID="ef2b44c9cc58a38cd274a3a43e5055e0b19698690cd171579657a5e817e39d36" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.885683 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.898224 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:32:21 crc kubenswrapper[4984]: I0130 10:32:21.912838 4984 scope.go:117] "RemoveContainer" containerID="bfa2ab18047e37a688c09f810f48fe15666e9db7c479446800a197ca2ebab7f8" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.017835 4984 scope.go:117] "RemoveContainer" containerID="945d8053cf4f55a96474daed1e2a95d74352b9409dd6e79655670085a0a0059b" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.059186 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.059269 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.059291 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.062728 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.062841 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.062968 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") pod \"17f579b7-9f28-42f6-a7be-b7c562962f19\" (UID: \"17f579b7-9f28-42f6-a7be-b7c562962f19\") " Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.078430 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl" (OuterVolumeSpecName: "kube-api-access-qjqvl") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "kube-api-access-qjqvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.111514 4984 scope.go:117] "RemoveContainer" containerID="0aa74f5e3bc92260f19b7d1adc4c7bf1b868d3d2ad619cc44a4cb4d9f84692e5" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.124561 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config" (OuterVolumeSpecName: "config") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.142293 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04477670-b6dd-441f-909a-e6b56bf335d5" path="/var/lib/kubelet/pods/04477670-b6dd-441f-909a-e6b56bf335d5/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.142987 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1238c32f-7644-4b33-8960-b97c64733162" path="/var/lib/kubelet/pods/1238c32f-7644-4b33-8960-b97c64733162/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.143989 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f" path="/var/lib/kubelet/pods/c0fa729b-8bb8-4d04-8b0f-b8f2f785a60f/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.144901 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde56acd-942d-47dd-8417-8c92170502ce" path="/var/lib/kubelet/pods/cde56acd-942d-47dd-8417-8c92170502ce/volumes" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.172094 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjqvl\" (UniqueName: \"kubernetes.io/projected/17f579b7-9f28-42f6-a7be-b7c562962f19-kube-api-access-qjqvl\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.172385 
4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.217010 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.223471 4984 scope.go:117] "RemoveContainer" containerID="2b6fbf81ccaeadc6f40196c9545cf384a0725f0968c807284fafa7d78761b307" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.271427 4984 generic.go:334] "Generic (PLEG): container finished" podID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" exitCode=0 Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.272210 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.272199 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerDied","Data":"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.272287 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-qkbrd" event={"ID":"17f579b7-9f28-42f6-a7be-b7c562962f19","Type":"ContainerDied","Data":"46705135c8b4ffe6cd8ae6b8808eab3201b22696187a5edc015badc4d1a286b1"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.294758 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5565c8d7-xqnh6" event={"ID":"0e442774-b2c1-418a-a5b2-edfd20f23c27","Type":"ContainerStarted","Data":"0701fcca5a69f41053b9d3d51870bbb16794337006362efe4e3e24f51cc6c3ec"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.296013 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qs8g9" event={"ID":"0b0be8dd-7b50-43e1-b223-8d5082a0c499","Type":"ContainerStarted","Data":"bbefe8ce4510fe7b158f92ce0dc00c30ab21b7eef1680bc50647aa0b28cbef5d"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.300312 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.300325 4984 scope.go:117] "RemoveContainer" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.313725 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.316810 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f6d8f475-hmb99" event={"ID":"a88ca399-adf6-4df4-8216-84de7603712b","Type":"ContainerStarted","Data":"dbe38b0ed08be70a90d19d5ca66d6f7fb98d8d3cb6fe3e0187d277c5e27088c3"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.321611 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.321653 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f6d8f475-hmb99" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.331769 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.342206 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.346708 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerStarted","Data":"55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.346811 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerStarted","Data":"1c5848b548d217213f89febcccd25bcde269e5553708abc872d8390746a63bbb"} Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.353862 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17f579b7-9f28-42f6-a7be-b7c562962f19" (UID: "17f579b7-9f28-42f6-a7be-b7c562962f19"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.371011 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.387291 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77f6d8f475-hmb99" podStartSLOduration=10.387226594 podStartE2EDuration="10.387226594s" podCreationTimestamp="2026-01-30 10:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:22.359363152 +0000 UTC m=+1246.925666976" watchObservedRunningTime="2026-01-30 10:32:22.387226594 +0000 UTC m=+1246.953530418" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.403822 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.406643 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.406701 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17f579b7-9f28-42f6-a7be-b7c562962f19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.430598 4984 scope.go:117] "RemoveContainer" containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.445489 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-7vrp9" podStartSLOduration=2.445458905 podStartE2EDuration="2.445458905s" 
podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:22.378767606 +0000 UTC m=+1246.945071430" watchObservedRunningTime="2026-01-30 10:32:22.445458905 +0000 UTC m=+1247.011762719" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.685276 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.694537 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698231 4984 scope.go:117] "RemoveContainer" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" Jan 30 10:32:22 crc kubenswrapper[4984]: E0130 10:32:22.698665 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3\": container with ID starting with f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3 not found: ID does not exist" containerID="f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698688 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3"} err="failed to get container status \"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3\": rpc error: code = NotFound desc = could not find container \"f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3\": container with ID starting with f7b6a7dbb50a335176f8ac9b168625f66857c789d712dd90641aeac987dcd3f3 not found: ID does not exist" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698709 4984 scope.go:117] "RemoveContainer" 
containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" Jan 30 10:32:22 crc kubenswrapper[4984]: E0130 10:32:22.698929 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d\": container with ID starting with 2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d not found: ID does not exist" containerID="2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.698947 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d"} err="failed to get container status \"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d\": rpc error: code = NotFound desc = could not find container \"2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d\": container with ID starting with 2a4bb7b9a60412da71acae1a0f7bd9a6ebd380ee6bf02f3f2c6ee76a0a1f761d not found: ID does not exist" Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.715654 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.727228 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.741135 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-qkbrd"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.805497 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:22 crc kubenswrapper[4984]: I0130 10:32:22.904015 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 
10:32:23.416956 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerStarted","Data":"b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.417201 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerStarted","Data":"e81cf1f9e79d8b70d2e235029d59bacdc97bdc433a06d9e6b5e9ac828ea06bcf"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.420800 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"c0c4c822948d363ec832d915082d1e20bbbbcf4ed4ee70954c08c129b901a0b2"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.439766 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" podStartSLOduration=2.43974878 podStartE2EDuration="2.43974878s" podCreationTimestamp="2026-01-30 10:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.439571465 +0000 UTC m=+1248.005875289" watchObservedRunningTime="2026-01-30 10:32:23.43974878 +0000 UTC m=+1248.006052604" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.460610 4984 generic.go:334] "Generic (PLEG): container finished" podID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerID="f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b" exitCode=0 Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.460666 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerDied","Data":"f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.466495 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerStarted","Data":"9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.466536 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerStarted","Data":"575515e274160feb6211a43f20905827d4c5fe15a6ff35e9c803951a2e985f46"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.475475 4984 generic.go:334] "Generic (PLEG): container finished" podID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerID="55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d" exitCode=0 Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.475833 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerDied","Data":"55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.504223 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-f837-account-create-update-tljj4" podStartSLOduration=3.504203428 podStartE2EDuration="3.504203428s" podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.494313682 +0000 UTC m=+1248.060617506" watchObservedRunningTime="2026-01-30 10:32:23.504203428 +0000 UTC m=+1248.070507252" Jan 30 10:32:23 crc kubenswrapper[4984]: 
I0130 10:32:23.508965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerStarted","Data":"73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.509007 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerStarted","Data":"8ef80d8fdaf645dd8d2bdf1957a895428b93f7cc3fbc4ac309bedad93fb31c93"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.525754 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerStarted","Data":"6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.525802 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerStarted","Data":"fa33540a290efc7162b768b57b3dd915005ebb7fab7039dcf2d2739115fcb47c"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.528772 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ced7140-d346-43c7-9139-7f460af079e2","Type":"ContainerStarted","Data":"440b0f3902c394e1b1df1eaa1c9d17747a052729fcb06b831cd00ac93d764dfe"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.555829 4984 generic.go:334] "Generic (PLEG): container finished" podID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerID="f9b3187c82aff853cf22b0038f5d38d1cea29bfe3a85c99f377ce27a24d35342" exitCode=0 Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.555920 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qs8g9" 
event={"ID":"0b0be8dd-7b50-43e1-b223-8d5082a0c499","Type":"ContainerDied","Data":"f9b3187c82aff853cf22b0038f5d38d1cea29bfe3a85c99f377ce27a24d35342"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.557426 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xjhtp" podStartSLOduration=3.557404853 podStartE2EDuration="3.557404853s" podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.548623876 +0000 UTC m=+1248.114927690" watchObservedRunningTime="2026-01-30 10:32:23.557404853 +0000 UTC m=+1248.123708677" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.564592 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5565c8d7-xqnh6" event={"ID":"0e442774-b2c1-418a-a5b2-edfd20f23c27","Type":"ContainerStarted","Data":"14fd681a6599f8e55f3b67ab760e8fc1db1dc3e02014cd45efc644931448963a"} Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.592054 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5565c8d7-xqnh6" podStartSLOduration=11.592034977 podStartE2EDuration="11.592034977s" podCreationTimestamp="2026-01-30 10:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:23.589136309 +0000 UTC m=+1248.155440133" watchObservedRunningTime="2026-01-30 10:32:23.592034977 +0000 UTC m=+1248.158338801" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.760388 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.975849 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976418 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976465 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976606 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976647 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976803 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976842 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.976878 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") pod \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\" (UID: \"5b0932ca-60dc-45f3-96ed-e8a9c6040375\") " Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.977981 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.978587 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs" (OuterVolumeSpecName: "logs") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.981921 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.988564 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2" (OuterVolumeSpecName: "kube-api-access-r9ql2") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "kube-api-access-r9ql2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:23 crc kubenswrapper[4984]: I0130 10:32:23.994135 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts" (OuterVolumeSpecName: "scripts") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.043478 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.058384 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data" (OuterVolumeSpecName: "config-data") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.070832 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5b0932ca-60dc-45f3-96ed-e8a9c6040375" (UID: "5b0932ca-60dc-45f3-96ed-e8a9c6040375"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081615 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081653 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9ql2\" (UniqueName: \"kubernetes.io/projected/5b0932ca-60dc-45f3-96ed-e8a9c6040375-kube-api-access-r9ql2\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081664 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081673 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 
10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081683 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b0932ca-60dc-45f3-96ed-e8a9c6040375-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081715 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081725 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.081733 4984 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b0932ca-60dc-45f3-96ed-e8a9c6040375-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.103045 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" path="/var/lib/kubelet/pods/17f579b7-9f28-42f6-a7be-b7c562962f19/volumes" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.105352 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.183293 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.587021 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"4ced7140-d346-43c7-9139-7f460af079e2","Type":"ContainerStarted","Data":"226cde834bd61cb47219e223cc386de57f67df31fc05d2712714e74c56daeb00"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.591647 4984 generic.go:334] "Generic (PLEG): container finished" podID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerID="b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.591833 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerDied","Data":"b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.596774 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b0932ca-60dc-45f3-96ed-e8a9c6040375","Type":"ContainerDied","Data":"8a1e7d08bcb7a1c10909d3b6f8549348ca67f5b537c84b6ec8529217335158a6"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.596830 4984 scope.go:117] "RemoveContainer" containerID="f5d2c684f725898702f9b307b8ca9f6269deea78615a6a0c69ae6a71f84efa6b" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.597482 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.598768 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.602010 4984 generic.go:334] "Generic (PLEG): container finished" podID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerID="9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.602562 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerDied","Data":"9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.606379 4984 generic.go:334] "Generic (PLEG): container finished" podID="24e68f06-af93-45d0-bf19-26469cac41f1" containerID="6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.606571 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerDied","Data":"6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.624034 4984 generic.go:334] "Generic (PLEG): container finished" podID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" containerID="73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081" exitCode=0 Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.624299 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" 
event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerDied","Data":"73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081"} Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.625186 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5565c8d7-xqnh6" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.639891 4984 scope.go:117] "RemoveContainer" containerID="b0bd86874350f63b8748ae8967e83266af0a39fdbb0fb9e72891b79c28551540" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.660311 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.676678 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699436 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699791 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699804 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log" Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699824 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699830 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns" Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699842 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="init" Jan 30 10:32:24 crc 
kubenswrapper[4984]: I0130 10:32:24.699850 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="init"
Jan 30 10:32:24 crc kubenswrapper[4984]: E0130 10:32:24.699865 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.699871 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.700318 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f579b7-9f28-42f6-a7be-b7c562962f19" containerName="dnsmasq-dns"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.700338 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-log"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.700345 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" containerName="glance-httpd"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.701315 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.705089 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.705205 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.716610 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.901723 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902012 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902037 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhc5\" (UniqueName: \"kubernetes.io/projected/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-kube-api-access-9lhc5\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902165 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902230 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902281 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:24 crc kubenswrapper[4984]: I0130 10:32:24.902446 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004402 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004447 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004476 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004555 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhc5\" (UniqueName: \"kubernetes.io/projected/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-kube-api-access-9lhc5\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004585 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004649 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004672 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.004832 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-logs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.005587 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.006390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.018292 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.018295 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.021212 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhc5\" (UniqueName: \"kubernetes.io/projected/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-kube-api-access-9lhc5\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.025629 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.038440 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bc5a16-54a8-4008-98ea-3adb9b24e9fa-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.067198 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"96bc5a16-54a8-4008-98ea-3adb9b24e9fa\") " pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.147814 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.249709 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.261094 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.308624 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") pod \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") "
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.308731 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") pod \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\" (UID: \"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24\") "
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.309743 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" (UID: "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.323618 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.333819 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk" (OuterVolumeSpecName: "kube-api-access-bmdwk") pod "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" (UID: "bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24"). InnerVolumeSpecName "kube-api-access-bmdwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411062 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") pod \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") "
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411224 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") pod \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\" (UID: \"61ce47a3-89a8-45f2-809e-9aaab0e718e2\") "
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411337 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") pod \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") "
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411452 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") pod \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\" (UID: \"0b0be8dd-7b50-43e1-b223-8d5082a0c499\") "
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411840 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdwk\" (UniqueName: \"kubernetes.io/projected/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-kube-api-access-bmdwk\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411836 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61ce47a3-89a8-45f2-809e-9aaab0e718e2" (UID: "61ce47a3-89a8-45f2-809e-9aaab0e718e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.411859 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.412548 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b0be8dd-7b50-43e1-b223-8d5082a0c499" (UID: "0b0be8dd-7b50-43e1-b223-8d5082a0c499"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.416578 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn" (OuterVolumeSpecName: "kube-api-access-wxfdn") pod "61ce47a3-89a8-45f2-809e-9aaab0e718e2" (UID: "61ce47a3-89a8-45f2-809e-9aaab0e718e2"). InnerVolumeSpecName "kube-api-access-wxfdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.433670 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7" (OuterVolumeSpecName: "kube-api-access-7vtd7") pod "0b0be8dd-7b50-43e1-b223-8d5082a0c499" (UID: "0b0be8dd-7b50-43e1-b223-8d5082a0c499"). InnerVolumeSpecName "kube-api-access-7vtd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514331 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b0be8dd-7b50-43e1-b223-8d5082a0c499-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514659 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtd7\" (UniqueName: \"kubernetes.io/projected/0b0be8dd-7b50-43e1-b223-8d5082a0c499-kube-api-access-7vtd7\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514671 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ce47a3-89a8-45f2-809e-9aaab0e718e2-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.514681 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxfdn\" (UniqueName: \"kubernetes.io/projected/61ce47a3-89a8-45f2-809e-9aaab0e718e2-kube-api-access-wxfdn\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.645419 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qs8g9" event={"ID":"0b0be8dd-7b50-43e1-b223-8d5082a0c499","Type":"ContainerDied","Data":"bbefe8ce4510fe7b158f92ce0dc00c30ab21b7eef1680bc50647aa0b28cbef5d"}
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.645513 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbefe8ce4510fe7b158f92ce0dc00c30ab21b7eef1680bc50647aa0b28cbef5d"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.645458 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qs8g9"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.661139 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af"}
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.662458 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7vrp9" event={"ID":"bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24","Type":"ContainerDied","Data":"1c5848b548d217213f89febcccd25bcde269e5553708abc872d8390746a63bbb"}
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.662485 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5848b548d217213f89febcccd25bcde269e5553708abc872d8390746a63bbb"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.662482 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7vrp9"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.664027 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9847-account-create-update-p46tr" event={"ID":"61ce47a3-89a8-45f2-809e-9aaab0e718e2","Type":"ContainerDied","Data":"8ef80d8fdaf645dd8d2bdf1957a895428b93f7cc3fbc4ac309bedad93fb31c93"}
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.664049 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ef80d8fdaf645dd8d2bdf1957a895428b93f7cc3fbc4ac309bedad93fb31c93"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.664093 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9847-account-create-update-p46tr"
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.670961 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ced7140-d346-43c7-9139-7f460af079e2","Type":"ContainerStarted","Data":"837a82394650be59869c86f7932775bd9f7396ce5d819163e507bb5bc612fb8a"}
Jan 30 10:32:25 crc kubenswrapper[4984]: I0130 10:32:25.697713 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.697693677 podStartE2EDuration="5.697693677s" podCreationTimestamp="2026-01-30 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:25.694948103 +0000 UTC m=+1250.261251927" watchObservedRunningTime="2026-01-30 10:32:25.697693677 +0000 UTC m=+1250.263997491"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.050009 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.146281 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0932ca-60dc-45f3-96ed-e8a9c6040375" path="/var/lib/kubelet/pods/5b0932ca-60dc-45f3-96ed-e8a9c6040375/volumes"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.222739 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.355196 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.472179 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") pod \"3c78c96a-fba2-4de8-ab70-a16d31722959\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") "
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.472420 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") pod \"3c78c96a-fba2-4de8-ab70-a16d31722959\" (UID: \"3c78c96a-fba2-4de8-ab70-a16d31722959\") "
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.484405 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f" (OuterVolumeSpecName: "kube-api-access-lxt4f") pod "3c78c96a-fba2-4de8-ab70-a16d31722959" (UID: "3c78c96a-fba2-4de8-ab70-a16d31722959"). InnerVolumeSpecName "kube-api-access-lxt4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.485587 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c78c96a-fba2-4de8-ab70-a16d31722959" (UID: "3c78c96a-fba2-4de8-ab70-a16d31722959"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.577057 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxt4f\" (UniqueName: \"kubernetes.io/projected/3c78c96a-fba2-4de8-ab70-a16d31722959-kube-api-access-lxt4f\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.577117 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c78c96a-fba2-4de8-ab70-a16d31722959-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.703204 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xjhtp" event={"ID":"24e68f06-af93-45d0-bf19-26469cac41f1","Type":"ContainerDied","Data":"fa33540a290efc7162b768b57b3dd915005ebb7fab7039dcf2d2739115fcb47c"}
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.703268 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa33540a290efc7162b768b57b3dd915005ebb7fab7039dcf2d2739115fcb47c"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.713701 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96bc5a16-54a8-4008-98ea-3adb9b24e9fa","Type":"ContainerStarted","Data":"e157dae2537056a9aade17205b36a1b238748495f14a0b927d45cb2aae736603"}
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.727442 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq" event={"ID":"3c78c96a-fba2-4de8-ab70-a16d31722959","Type":"ContainerDied","Data":"e81cf1f9e79d8b70d2e235029d59bacdc97bdc433a06d9e6b5e9ac828ea06bcf"}
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.727820 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81cf1f9e79d8b70d2e235029d59bacdc97bdc433a06d9e6b5e9ac828ea06bcf"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.727476 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-32c7-account-create-update-2mdsq"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.742093 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f"}
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.753009 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f837-account-create-update-tljj4" event={"ID":"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1","Type":"ContainerDied","Data":"575515e274160feb6211a43f20905827d4c5fe15a6ff35e9c803951a2e985f46"}
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.753050 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575515e274160feb6211a43f20905827d4c5fe15a6ff35e9c803951a2e985f46"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.770778 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.770963 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4"
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895173 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") pod \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") "
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895307 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") pod \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\" (UID: \"4173473e-6a7e-400a-bc3e-2a22d5ef6cd1\") "
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895345 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") pod \"24e68f06-af93-45d0-bf19-26469cac41f1\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") "
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.895366 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") pod \"24e68f06-af93-45d0-bf19-26469cac41f1\" (UID: \"24e68f06-af93-45d0-bf19-26469cac41f1\") "
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.896429 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24e68f06-af93-45d0-bf19-26469cac41f1" (UID: "24e68f06-af93-45d0-bf19-26469cac41f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.898627 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" (UID: "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.906606 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7" (OuterVolumeSpecName: "kube-api-access-wtcv7") pod "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" (UID: "4173473e-6a7e-400a-bc3e-2a22d5ef6cd1"). InnerVolumeSpecName "kube-api-access-wtcv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.922471 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2" (OuterVolumeSpecName: "kube-api-access-zxxh2") pod "24e68f06-af93-45d0-bf19-26469cac41f1" (UID: "24e68f06-af93-45d0-bf19-26469cac41f1"). InnerVolumeSpecName "kube-api-access-zxxh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.996977 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.997018 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcv7\" (UniqueName: \"kubernetes.io/projected/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1-kube-api-access-wtcv7\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.997033 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxxh2\" (UniqueName: \"kubernetes.io/projected/24e68f06-af93-45d0-bf19-26469cac41f1-kube-api-access-zxxh2\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:26 crc kubenswrapper[4984]: I0130 10:32:26.997046 4984 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e68f06-af93-45d0-bf19-26469cac41f1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.440977 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.442693 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" containerID="cri-o://26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09" gracePeriod=30
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.443317 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" containerID="cri-o://01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d" gracePeriod=30
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.762824 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96bc5a16-54a8-4008-98ea-3adb9b24e9fa","Type":"ContainerStarted","Data":"f1e3598428243b6bbc619a1616f2f8a7f845042b9830ea6e8b0a96f9caed0944"}
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768562 4984 generic.go:334] "Generic (PLEG): container finished" podID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerID="26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09" exitCode=143
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768655 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerDied","Data":"26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09"}
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768709 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xjhtp"
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.768787 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f837-account-create-update-tljj4"
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.845568 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:27 crc kubenswrapper[4984]: I0130 10:32:27.853132 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f6d8f475-hmb99"
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.199965 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.405918 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68474f84b8-6pzwt"
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.791401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96bc5a16-54a8-4008-98ea-3adb9b24e9fa","Type":"ContainerStarted","Data":"679a0e2026d23d1b3baddab54bccd4fb9d36b4a871c50da9c074d8bbf87cb23c"}
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.796517 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" containerID="cri-o://5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6" gracePeriod=30
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.796908 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerStarted","Data":"295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0"}
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797542 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797621 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" containerID="cri-o://e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f" gracePeriod=30
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797652 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" containerID="cri-o://3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af" gracePeriod=30
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.797708 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" containerID="cri-o://295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0" gracePeriod=30
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.818516 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.818497805 podStartE2EDuration="4.818497805s" podCreationTimestamp="2026-01-30 10:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:28.808262799 +0000 UTC m=+1253.374566623" watchObservedRunningTime="2026-01-30 10:32:28.818497805 +0000 UTC m=+1253.384801639"
Jan 30 10:32:28 crc kubenswrapper[4984]: I0130 10:32:28.837024 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.220540147 podStartE2EDuration="7.837009024s" podCreationTimestamp="2026-01-30 10:32:21 +0000 UTC" firstStartedPulling="2026-01-30 10:32:22.877985749 +0000 UTC m=+1247.444289573" lastFinishedPulling="2026-01-30 10:32:28.494454626 +0000 UTC m=+1253.060758450" observedRunningTime="2026-01-30 10:32:28.833010177 +0000 UTC m=+1253.399314001" watchObservedRunningTime="2026-01-30 10:32:28.837009024 +0000 UTC m=+1253.403312848"
Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815084 4984 generic.go:334] "Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f" exitCode=2
Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815122 4984 generic.go:334] "Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af" exitCode=0
Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815123 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f"}
Jan 30 10:32:29 crc kubenswrapper[4984]: I0130 10:32:29.815190 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af"}
Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.832841 4984 generic.go:334] "Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6" exitCode=0
Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.833240 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6"}
Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.840579 4984 generic.go:334] "Generic (PLEG): container finished"
podID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerID="01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d" exitCode=0 Jan 30 10:32:30 crc kubenswrapper[4984]: I0130 10:32:30.840654 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerDied","Data":"01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d"} Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.160683 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288678 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288776 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288813 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: 
\"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288923 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.288999 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.289104 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.289149 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") pod \"e5a91d1d-433e-415f-83f8-04185f2bae8e\" (UID: \"e5a91d1d-433e-415f-83f8-04185f2bae8e\") " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.290448 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs" (OuterVolumeSpecName: "logs") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.292078 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.297195 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts" (OuterVolumeSpecName: "scripts") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.298464 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8" (OuterVolumeSpecName: "kube-api-access-7ggp8") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "kube-api-access-7ggp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.301393 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.346510 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.366895 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data" (OuterVolumeSpecName: "config-data") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.375307 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5a91d1d-433e-415f-83f8-04185f2bae8e" (UID: "e5a91d1d-433e-415f-83f8-04185f2bae8e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392031 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392063 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392074 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392088 4984 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392143 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392153 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a91d1d-433e-415f-83f8-04185f2bae8e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392162 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ggp8\" (UniqueName: \"kubernetes.io/projected/e5a91d1d-433e-415f-83f8-04185f2bae8e-kube-api-access-7ggp8\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.392171 4984 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a91d1d-433e-415f-83f8-04185f2bae8e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.419993 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428408 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428767 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428783 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428800 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428805 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428819 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428826 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428843 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" 
containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428849 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428862 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e68f06-af93-45d0-bf19-26469cac41f1" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428867 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e68f06-af93-45d0-bf19-26469cac41f1" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428879 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428884 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428893 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428898 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" Jan 30 10:32:31 crc kubenswrapper[4984]: E0130 10:32:31.428907 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.428913 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429087 4984 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="24e68f06-af93-45d0-bf19-26469cac41f1" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429095 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-log" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429105 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429117 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429124 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" containerName="mariadb-account-create-update" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429136 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" containerName="glance-httpd" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429145 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429155 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" containerName="mariadb-database-create" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.429705 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.433905 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.434139 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lllgq" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.434325 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.441415 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.471761 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.494093 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595527 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " 
pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595588 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.595651 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697507 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697620 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697654 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: 
\"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.697689 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.703427 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.703482 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.705973 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.717565 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"nova-cell0-conductor-db-sync-5wpxl\" (UID: 
\"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") " pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.746356 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.861393 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e5a91d1d-433e-415f-83f8-04185f2bae8e","Type":"ContainerDied","Data":"81d909e987140f69c978d7da4bf0a1e0f9d7262be6a29494c8b1e94ebfddf37b"} Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.861449 4984 scope.go:117] "RemoveContainer" containerID="01de83e3aac52a995db4c49e0d5ab1002e876db132c90a614128638ba69e7a8d" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.861522 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.884643 4984 scope.go:117] "RemoveContainer" containerID="26798d5779e66ba0c0b1f299721502091679e91fafc15f7d2c462244a1d07d09" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.927987 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.948145 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.969955 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.971488 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.973777 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.974038 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 10:32:31 crc kubenswrapper[4984]: I0130 10:32:31.986366 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.101809 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a91d1d-433e-415f-83f8-04185f2bae8e" path="/var/lib/kubelet/pods/e5a91d1d-433e-415f-83f8-04185f2bae8e/volumes" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.105877 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.105916 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0" Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.105935 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " 
pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106330 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106419 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106448 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106505 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w556j\" (UniqueName: \"kubernetes.io/projected/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-kube-api-access-w556j\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.106549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.209809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w556j\" (UniqueName: \"kubernetes.io/projected/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-kube-api-access-w556j\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.210187 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.210757 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211117 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211158 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211182 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211341 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.211429 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.212021 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-logs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.212327 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.216851 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.218205 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.218214 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-config-data\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.221041 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-scripts\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.276743 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.277585 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w556j\" (UniqueName: \"kubernetes.io/projected/2fa01bff-d884-4b1f-b0c2-8c0fbd957a30-kube-api-access-w556j\") pod \"glance-default-external-api-0\" (UID: \"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30\") " pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.302355 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.313891 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"]
Jan 30 10:32:32 crc kubenswrapper[4984]: W0130 10:32:32.321462 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeaa8458_e32e_4a6f_9e67_3e394d9daa32.slice/crio-c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c WatchSource:0}: Error finding container c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c: Status 404 returned error can't find the container with id c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.775144 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 10:32:32 crc kubenswrapper[4984]: W0130 10:32:32.786501 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fa01bff_d884_4b1f_b0c2_8c0fbd957a30.slice/crio-4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86 WatchSource:0}: Error finding container 4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86: Status 404 returned error can't find the container with id 4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.874187 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30","Type":"ContainerStarted","Data":"4ffe63863bf682b823a099ff2f7223886ea8d640d8e742f9da30a152a2b58a86"}
Jan 30 10:32:32 crc kubenswrapper[4984]: I0130 10:32:32.876027 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerStarted","Data":"c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c"}
Jan 30 10:32:33 crc kubenswrapper[4984]: I0130 10:32:33.898751 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30","Type":"ContainerStarted","Data":"36ae8e98f3586be7682cdfe6e2f3a1fabe4f2cc8e732cf8315dd0e85dce69c2c"}
Jan 30 10:32:34 crc kubenswrapper[4984]: I0130 10:32:34.912892 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2fa01bff-d884-4b1f-b0c2-8c0fbd957a30","Type":"ContainerStarted","Data":"7a6518c74d129770f26cd6d11dad7296f644bb1ce8f14294e6e0067d40d81472"}
Jan 30 10:32:34 crc kubenswrapper[4984]: I0130 10:32:34.950978 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.950957537 podStartE2EDuration="3.950957537s" podCreationTimestamp="2026-01-30 10:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:34.933584349 +0000 UTC m=+1259.499888213" watchObservedRunningTime="2026-01-30 10:32:34.950957537 +0000 UTC m=+1259.517261371"
Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.324786 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.325068 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.372719 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.390559 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.922883 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:35 crc kubenswrapper[4984]: I0130 10:32:35.922928 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:37 crc kubenswrapper[4984]: I0130 10:32:37.790097 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:37 crc kubenswrapper[4984]: I0130 10:32:37.795200 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 10:32:40 crc kubenswrapper[4984]: I0130 10:32:40.765052 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:40 crc kubenswrapper[4984]: I0130 10:32:40.986004 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerStarted","Data":"f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d"}
Jan 30 10:32:41 crc kubenswrapper[4984]: I0130 10:32:41.015889 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" podStartSLOduration=2.405049383 podStartE2EDuration="10.015866437s" podCreationTimestamp="2026-01-30 10:32:31 +0000 UTC" firstStartedPulling="2026-01-30 10:32:32.329696182 +0000 UTC m=+1256.896000006" lastFinishedPulling="2026-01-30 10:32:39.940513236 +0000 UTC m=+1264.506817060" observedRunningTime="2026-01-30 10:32:41.005665882 +0000 UTC m=+1265.571969706" watchObservedRunningTime="2026-01-30 10:32:41.015866437 +0000 UTC m=+1265.582170271"
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.302923 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.304067 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.338092 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.349426 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.507503 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5565c8d7-xqnh6"
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.589444 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"]
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.590782 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-599cd9b588-9ll76" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api" containerID="cri-o://5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" gracePeriod=30
Jan 30 10:32:42 crc kubenswrapper[4984]: I0130 10:32:42.590922 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-599cd9b588-9ll76" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd" containerID="cri-o://e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" gracePeriod=30
Jan 30 10:32:43 crc kubenswrapper[4984]: I0130 10:32:43.003170 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:43 crc kubenswrapper[4984]: I0130 10:32:43.003203 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:44 crc kubenswrapper[4984]: I0130 10:32:44.918692 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:45 crc kubenswrapper[4984]: I0130 10:32:45.017828 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 10:32:45 crc kubenswrapper[4984]: I0130 10:32:45.029855 4984 generic.go:334] "Generic (PLEG): container finished" podID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08" exitCode=0
Jan 30 10:32:45 crc kubenswrapper[4984]: I0130 10:32:45.029928 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerDied","Data":"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"}
Jan 30 10:32:51 crc kubenswrapper[4984]: I0130 10:32:51.723811 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.694870 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.825961 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") "
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826143 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") "
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826237 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") "
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826525 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") "
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.826591 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") pod \"4c1c7220-21e6-477f-aa26-eb230da7178f\" (UID: \"4c1c7220-21e6-477f-aa26-eb230da7178f\") "
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.833485 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.833734 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd" (OuterVolumeSpecName: "kube-api-access-z6tgd") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "kube-api-access-z6tgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.883595 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.901490 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config" (OuterVolumeSpecName: "config") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.911988 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4c1c7220-21e6-477f-aa26-eb230da7178f" (UID: "4c1c7220-21e6-477f-aa26-eb230da7178f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929311 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929351 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6tgd\" (UniqueName: \"kubernetes.io/projected/4c1c7220-21e6-477f-aa26-eb230da7178f-kube-api-access-z6tgd\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929368 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929380 4984 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:52 crc kubenswrapper[4984]: I0130 10:32:52.929391 4984 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c1c7220-21e6-477f-aa26-eb230da7178f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.100963 4984 generic.go:334] "Generic (PLEG): container finished" podID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a" exitCode=0
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101015 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerDied","Data":"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"}
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101046 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599cd9b588-9ll76" event={"ID":"4c1c7220-21e6-477f-aa26-eb230da7178f","Type":"ContainerDied","Data":"726ba7faaff55c103e2271e253ff0f17293623696cf0c95eaae899332787dccc"}
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101045 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599cd9b588-9ll76"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.101066 4984 scope.go:117] "RemoveContainer" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.131743 4984 scope.go:117] "RemoveContainer" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.156215 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"]
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.158920 4984 scope.go:117] "RemoveContainer" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"
Jan 30 10:32:53 crc kubenswrapper[4984]: E0130 10:32:53.159597 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08\": container with ID starting with e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08 not found: ID does not exist" containerID="e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.159651 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08"} err="failed to get container status \"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08\": rpc error: code = NotFound desc = could not find container \"e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08\": container with ID starting with e195160fb320bf18c1b820611221577dfb0a1597e7eb86a94c699e6d8119ac08 not found: ID does not exist"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.159683 4984 scope.go:117] "RemoveContainer" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"
Jan 30 10:32:53 crc kubenswrapper[4984]: E0130 10:32:53.160214 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a\": container with ID starting with 5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a not found: ID does not exist" containerID="5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.160267 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a"} err="failed to get container status \"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a\": rpc error: code = NotFound desc = could not find container \"5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a\": container with ID starting with 5362966b6a401f9c35a84ed9019be98989a3634458e5372672530258044e8a2a not found: ID does not exist"
Jan 30 10:32:53 crc kubenswrapper[4984]: I0130 10:32:53.165060 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-599cd9b588-9ll76"]
Jan 30 10:32:54 crc kubenswrapper[4984]: I0130 10:32:54.100544 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" path="/var/lib/kubelet/pods/4c1c7220-21e6-477f-aa26-eb230da7178f/volumes"
Jan 30 10:32:54 crc kubenswrapper[4984]: I0130 10:32:54.110431 4984 generic.go:334] "Generic (PLEG): container finished" podID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerID="f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d" exitCode=0
Jan 30 10:32:54 crc kubenswrapper[4984]: I0130 10:32:54.110513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerDied","Data":"f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d"}
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.482703 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl"
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.574690 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") "
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.574783 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") "
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.574958 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") "
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.575010 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") pod \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\" (UID: \"deaa8458-e32e-4a6f-9e67-3e394d9daa32\") "
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.585000 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f" (OuterVolumeSpecName: "kube-api-access-7wr4f") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "kube-api-access-7wr4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.604355 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts" (OuterVolumeSpecName: "scripts") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.613363 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data" (OuterVolumeSpecName: "config-data") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.616403 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deaa8458-e32e-4a6f-9e67-3e394d9daa32" (UID: "deaa8458-e32e-4a6f-9e67-3e394d9daa32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678274 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678329 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678346 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deaa8458-e32e-4a6f-9e67-3e394d9daa32-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:55 crc kubenswrapper[4984]: I0130 10:32:55.678360 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wr4f\" (UniqueName: \"kubernetes.io/projected/deaa8458-e32e-4a6f-9e67-3e394d9daa32-kube-api-access-7wr4f\") on node \"crc\" DevicePath \"\""
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.141781 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5wpxl" event={"ID":"deaa8458-e32e-4a6f-9e67-3e394d9daa32","Type":"ContainerDied","Data":"c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c"}
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.141830 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1de7321e950c778a40f7dac614719a32453f9cea1ae0ffc3932d5b177fdf04c"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.141862 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5wpxl"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.224942 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 10:32:56 crc kubenswrapper[4984]: E0130 10:32:56.225771 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.225794 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd"
Jan 30 10:32:56 crc kubenswrapper[4984]: E0130 10:32:56.225809 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerName="nova-cell0-conductor-db-sync"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.225819 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerName="nova-cell0-conductor-db-sync"
Jan 30 10:32:56 crc kubenswrapper[4984]: E0130 10:32:56.225844 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.225852 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226078 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" containerName="nova-cell0-conductor-db-sync"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226103 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-api"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226120 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1c7220-21e6-477f-aa26-eb230da7178f" containerName="neutron-httpd"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.226831 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.229843 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.231692 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lllgq"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.238040 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.289108 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.289629 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.289757 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28r6\" (UniqueName: \"kubernetes.io/projected/4d02a683-2231-4e04-89bb-748baf8bc65d-kube-api-access-q28r6\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.391526 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.391678 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.391732 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28r6\" (UniqueName: \"kubernetes.io/projected/4d02a683-2231-4e04-89bb-748baf8bc65d-kube-api-access-q28r6\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.397137 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.397978 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d02a683-2231-4e04-89bb-748baf8bc65d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.408838 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28r6\" (UniqueName: \"kubernetes.io/projected/4d02a683-2231-4e04-89bb-748baf8bc65d-kube-api-access-q28r6\") pod \"nova-cell0-conductor-0\" (UID: \"4d02a683-2231-4e04-89bb-748baf8bc65d\") " pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:56 crc kubenswrapper[4984]: I0130 10:32:56.553987 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:57 crc kubenswrapper[4984]: I0130 10:32:57.015287 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 30 10:32:57 crc kubenswrapper[4984]: I0130 10:32:57.151660 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d02a683-2231-4e04-89bb-748baf8bc65d","Type":"ContainerStarted","Data":"27dce1ad12cff1aa5d095db69bf9b05ed524eefcb192792ee9726010a9ea29bf"}
Jan 30 10:32:58 crc kubenswrapper[4984]: I0130 10:32:58.166224 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4d02a683-2231-4e04-89bb-748baf8bc65d","Type":"ContainerStarted","Data":"266948e746c3c855632dfb910262da4a921ac76a4389b29a77dc6bdd2fda4db3"}
Jan 30 10:32:58 crc kubenswrapper[4984]: I0130 10:32:58.166587 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 30 10:32:58 crc kubenswrapper[4984]: I0130 10:32:58.191400 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.191382313 podStartE2EDuration="2.191382313s" podCreationTimestamp="2026-01-30 10:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:32:58.183612093 +0000 UTC m=+1282.749915917" watchObservedRunningTime="2026-01-30 10:32:58.191382313 +0000 UTC m=+1282.757686137"
Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.178428 4984 generic.go:334]
"Generic (PLEG): container finished" podID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerID="295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0" exitCode=137 Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.178498 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0"} Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.179131 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15f1513a-b6e2-45fc-812c-a5dcb490d5bd","Type":"ContainerDied","Data":"c0c4c822948d363ec832d915082d1e20bbbbcf4ed4ee70954c08c129b901a0b2"} Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.179152 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c4c822948d363ec832d915082d1e20bbbbcf4ed4ee70954c08c129b901a0b2" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.254568 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351168 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351210 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351242 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351317 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.351470 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") pod \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\" (UID: \"15f1513a-b6e2-45fc-812c-a5dcb490d5bd\") " Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.352391 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.352640 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.358341 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2" (OuterVolumeSpecName: "kube-api-access-fwkv2") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "kube-api-access-fwkv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.358492 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts" (OuterVolumeSpecName: "scripts") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.376279 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.426736 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.439971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data" (OuterVolumeSpecName: "config-data") pod "15f1513a-b6e2-45fc-812c-a5dcb490d5bd" (UID: "15f1513a-b6e2-45fc-812c-a5dcb490d5bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453584 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453612 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453626 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453638 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453648 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453658 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:32:59 crc kubenswrapper[4984]: I0130 10:32:59.453668 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwkv2\" (UniqueName: \"kubernetes.io/projected/15f1513a-b6e2-45fc-812c-a5dcb490d5bd-kube-api-access-fwkv2\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.188627 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.216386 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.226521 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240335 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240772 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240790 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240819 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240826 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240847 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.240855 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" Jan 30 10:33:00 crc kubenswrapper[4984]: E0130 10:33:00.240876 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 
10:33:00.240883 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241065 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="proxy-httpd" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241085 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-central-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241105 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="sg-core" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.241120 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" containerName="ceilometer-notification-agent" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.242915 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.244829 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.244875 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.260037 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.371593 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.371665 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.371886 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372009 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372079 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372225 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.372307 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475033 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475101 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475127 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475202 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475263 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475323 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.475364 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.476021 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " 
pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.476398 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.482114 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.483056 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.483645 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.486795 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.515298 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4jm\" (UniqueName: 
\"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"ceilometer-0\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " pod="openstack/ceilometer-0" Jan 30 10:33:00 crc kubenswrapper[4984]: I0130 10:33:00.560223 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:01 crc kubenswrapper[4984]: I0130 10:33:01.041415 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:01 crc kubenswrapper[4984]: W0130 10:33:01.050500 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c10d6ea_d3d3_49cf_8185_0b4946edc4be.slice/crio-6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4 WatchSource:0}: Error finding container 6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4: Status 404 returned error can't find the container with id 6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4 Jan 30 10:33:01 crc kubenswrapper[4984]: I0130 10:33:01.054950 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:33:01 crc kubenswrapper[4984]: I0130 10:33:01.202569 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4"} Jan 30 10:33:02 crc kubenswrapper[4984]: I0130 10:33:02.102923 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15f1513a-b6e2-45fc-812c-a5dcb490d5bd" path="/var/lib/kubelet/pods/15f1513a-b6e2-45fc-812c-a5dcb490d5bd/volumes" Jan 30 10:33:02 crc kubenswrapper[4984]: I0130 10:33:02.210941 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce"} Jan 30 10:33:03 crc kubenswrapper[4984]: I0130 10:33:03.225427 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb"} Jan 30 10:33:06 crc kubenswrapper[4984]: I0130 10:33:06.580857 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.111611 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.113043 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.116790 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.117037 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.131728 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.201439 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.203540 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.204120 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.204643 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.275318 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f"} Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.286612 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.293315 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.296867 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306188 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306236 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306343 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.306555 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.313085 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.314735 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.332359 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.337231 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"nova-cell0-cell-mapping-hphht\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") " pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408390 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408633 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.408657 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.437662 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.456385 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.457590 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.460688 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.503472 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.504985 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.521642 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523170 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523218 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523271 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523288 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523306 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523347 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.523387 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.525897 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.540743 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.540936 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.549400 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.552880 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"nova-api-0\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.617320 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.618587 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.620410 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.625318 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.625862 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.625951 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " 
pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626106 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626219 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626287 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.626440 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.638996 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.640663 
4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.645597 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.653323 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"nova-cell1-novncproxy-0\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.658747 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.699733 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.722395 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.724527 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.728908 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.728956 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729013 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729078 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729116 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: 
I0130 10:33:07.729150 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.729637 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.731951 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.736520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.769541 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.777095 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.819532 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"nova-metadata-0\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " pod="openstack/nova-metadata-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.830958 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832192 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: 
\"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832294 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832372 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832509 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.832545 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.833555 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.833703 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.833762 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.838094 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.853269 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.855156 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"nova-scheduler-0\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:07 crc 
kubenswrapper[4984]: I0130 10:33:07.936811 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.940997 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941338 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941500 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941733 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.941858 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.942372 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.943154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.945699 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.945713 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.945768 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.958905 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"dnsmasq-dns-bccf8f775-k2jmh\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:07 crc kubenswrapper[4984]: I0130 10:33:07.984186 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.007406 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.086624 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.112237 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.225463 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.293857 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerStarted","Data":"e5e33b1fca148910fadb2320c0204e6859f314fe51e88bec9fdfc835c9853b27"} Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.298550 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerStarted","Data":"af5529722b6cfea0d21c483516240a6c61e08bb8fa1bfc0ece4e5fb90209726f"} Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.341349 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.528886 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.530422 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.534204 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.534409 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.541043 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.556642 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.559402 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.559523 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.559760 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.607260 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.662949 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.663074 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.663102 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.663324 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: 
\"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.674086 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.675928 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.677519 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.678668 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.690052 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"nova-cell1-conductor-db-sync-nvx8g\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:08 crc kubenswrapper[4984]: W0130 10:33:08.699327 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbcb3e98_2063_421d_a76f_bca749fa2824.slice/crio-075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d WatchSource:0}: Error finding container 075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d: Status 404 returned error can't find the container with id 075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.703300 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:08 crc kubenswrapper[4984]: I0130 10:33:08.856564 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.315226 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerStarted","Data":"d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.322202 4984 generic.go:334] "Generic (PLEG): container finished" podID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerID="aa1f69e5832486947c309113f3fb6a6493f2b91d3f8828fd6cfe76af73d8b0a8" exitCode=0 Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.322387 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerDied","Data":"aa1f69e5832486947c309113f3fb6a6493f2b91d3f8828fd6cfe76af73d8b0a8"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.322419 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerStarted","Data":"d321da41062e4b6042ed3a9bb6a7b9877923a06f6b266f1b243b188fd84ea8bc"} Jan 30 10:33:09 crc 
kubenswrapper[4984]: I0130 10:33:09.324092 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerStarted","Data":"ecd4903f8d6e5a12e35abf7e02e0342af660b1313797fb073846a8e0fffb44cd"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.328390 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerStarted","Data":"d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.328805 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.331062 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerStarted","Data":"075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.333164 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerStarted","Data":"e2e94b843b76f25cb381e78e8277fe2519e55455a14ac71ad0fc880044917b0f"} Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.339686 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hphht" podStartSLOduration=2.339666321 podStartE2EDuration="2.339666321s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:09.334760489 +0000 UTC m=+1293.901064313" watchObservedRunningTime="2026-01-30 10:33:09.339666321 +0000 UTC m=+1293.905970145" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.370062 4984 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.134710124 podStartE2EDuration="9.37003541s" podCreationTimestamp="2026-01-30 10:33:00 +0000 UTC" firstStartedPulling="2026-01-30 10:33:01.054490121 +0000 UTC m=+1285.620793985" lastFinishedPulling="2026-01-30 10:33:08.289815437 +0000 UTC m=+1292.856119271" observedRunningTime="2026-01-30 10:33:09.361521941 +0000 UTC m=+1293.927825775" watchObservedRunningTime="2026-01-30 10:33:09.37003541 +0000 UTC m=+1293.936339234" Jan 30 10:33:09 crc kubenswrapper[4984]: I0130 10:33:09.399609 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.351015 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerStarted","Data":"dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f"} Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.352768 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.358604 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerStarted","Data":"01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df"} Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.358641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerStarted","Data":"7460d26f16ced1d1e6a9ddf520dce3ce58c888acd0fc9117f073f9d56ecfe696"} Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.379064 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" podStartSLOduration=3.379042303 podStartE2EDuration="3.379042303s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:10.366548076 +0000 UTC m=+1294.932851900" watchObservedRunningTime="2026-01-30 10:33:10.379042303 +0000 UTC m=+1294.945346127" Jan 30 10:33:10 crc kubenswrapper[4984]: I0130 10:33:10.390739 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" podStartSLOduration=2.390721188 podStartE2EDuration="2.390721188s" podCreationTimestamp="2026-01-30 10:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:10.381732716 +0000 UTC m=+1294.948036540" watchObservedRunningTime="2026-01-30 10:33:10.390721188 +0000 UTC m=+1294.957025012" Jan 30 10:33:11 crc kubenswrapper[4984]: I0130 10:33:11.028933 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:11 crc kubenswrapper[4984]: I0130 10:33:11.037315 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.408836 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerStarted","Data":"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.408895 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" 
gracePeriod=30 Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.421041 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerStarted","Data":"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.421084 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerStarted","Data":"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.422646 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerStarted","Data":"955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430220 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerStarted","Data":"9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430343 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerStarted","Data":"a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b"} Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430471 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" containerID="cri-o://a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b" gracePeriod=30 Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.430586 4984 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" containerID="cri-o://9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179" gracePeriod=30 Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.445485 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.39593471 podStartE2EDuration="7.445460965s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.249708686 +0000 UTC m=+1292.816012510" lastFinishedPulling="2026-01-30 10:33:13.299234931 +0000 UTC m=+1297.865538765" observedRunningTime="2026-01-30 10:33:14.428083656 +0000 UTC m=+1298.994387480" watchObservedRunningTime="2026-01-30 10:33:14.445460965 +0000 UTC m=+1299.011764789" Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.449684 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.851120446 podStartE2EDuration="7.449664598s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.700688919 +0000 UTC m=+1293.266992743" lastFinishedPulling="2026-01-30 10:33:13.299233071 +0000 UTC m=+1297.865536895" observedRunningTime="2026-01-30 10:33:14.447178321 +0000 UTC m=+1299.013482145" watchObservedRunningTime="2026-01-30 10:33:14.449664598 +0000 UTC m=+1299.015968422" Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.495042 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.529207184 podStartE2EDuration="7.494988751s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.360810532 +0000 UTC m=+1292.927114356" lastFinishedPulling="2026-01-30 10:33:13.326592099 +0000 UTC m=+1297.892895923" observedRunningTime="2026-01-30 10:33:14.467107679 +0000 UTC m=+1299.033411513" 
watchObservedRunningTime="2026-01-30 10:33:14.494988751 +0000 UTC m=+1299.061292585" Jan 30 10:33:14 crc kubenswrapper[4984]: I0130 10:33:14.513093 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.843283785 podStartE2EDuration="7.513075368s" podCreationTimestamp="2026-01-30 10:33:07 +0000 UTC" firstStartedPulling="2026-01-30 10:33:08.62768867 +0000 UTC m=+1293.193992504" lastFinishedPulling="2026-01-30 10:33:13.297480263 +0000 UTC m=+1297.863784087" observedRunningTime="2026-01-30 10:33:14.491814315 +0000 UTC m=+1299.058118129" watchObservedRunningTime="2026-01-30 10:33:14.513075368 +0000 UTC m=+1299.079379192" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.441836 4984 generic.go:334] "Generic (PLEG): container finished" podID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerID="9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179" exitCode=0 Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.442122 4984 generic.go:334] "Generic (PLEG): container finished" podID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerID="a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b" exitCode=143 Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.441920 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerDied","Data":"9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179"} Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.442180 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerDied","Data":"a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b"} Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.567438 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622273 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622478 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622540 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.622606 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") pod \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\" (UID: \"6a7be6e3-d6f3-4aef-b870-985a4e3a400f\") " Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.623213 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs" (OuterVolumeSpecName: "logs") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.640784 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm" (OuterVolumeSpecName: "kube-api-access-wjvmm") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "kube-api-access-wjvmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.648579 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.654486 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data" (OuterVolumeSpecName: "config-data") pod "6a7be6e3-d6f3-4aef-b870-985a4e3a400f" (UID: "6a7be6e3-d6f3-4aef-b870-985a4e3a400f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.724978 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.725023 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.725039 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvmm\" (UniqueName: \"kubernetes.io/projected/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-kube-api-access-wjvmm\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:15 crc kubenswrapper[4984]: I0130 10:33:15.725054 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a7be6e3-d6f3-4aef-b870-985a4e3a400f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:16 crc kubenswrapper[4984]: E0130 10:33:16.218381 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a7be6e3_d6f3_4aef_b870_985a4e3a400f.slice\": RecentStats: unable to find data in memory cache]" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.455622 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a7be6e3-d6f3-4aef-b870-985a4e3a400f","Type":"ContainerDied","Data":"e2e94b843b76f25cb381e78e8277fe2519e55455a14ac71ad0fc880044917b0f"} Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.455678 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.455692 4984 scope.go:117] "RemoveContainer" containerID="9e7d20b9f7c851362fbd2365e079babc74baacf84a4c46f58fcbe6a6be226179" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.491410 4984 scope.go:117] "RemoveContainer" containerID="a6b2b7ca2e0c2e4207a38b412a2bba0a5eec40695587b7a97f6901bee0d49a8b" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.499462 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.518649 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527328 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: E0130 10:33:16.527770 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527789 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" Jan 30 10:33:16 crc kubenswrapper[4984]: E0130 10:33:16.527801 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527807 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.527997 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-log" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.528012 4984 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" containerName="nova-metadata-metadata" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.529093 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.531410 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.531434 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.536056 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638193 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638293 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638332 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.638493 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.739940 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740003 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740049 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740093 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740201 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.740678 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.749927 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.751127 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.764579 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc 
kubenswrapper[4984]: I0130 10:33:16.764609 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") " pod="openstack/nova-metadata-0" Jan 30 10:33:16 crc kubenswrapper[4984]: I0130 10:33:16.872359 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.168191 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.466102 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerStarted","Data":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.466565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerStarted","Data":"4144ee9c2a28e71f131e6c10f223d1b110888ba0da851c6cf5c4df3303551826"} Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.469540 4984 generic.go:334] "Generic (PLEG): container finished" podID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerID="d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57" exitCode=0 Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.469580 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerDied","Data":"d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57"} Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.660043 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0"
Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.733524 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 10:33:17 crc kubenswrapper[4984]: I0130 10:33:17.733585 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.009999 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.010382 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.041011 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.088427 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.104447 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7be6e3-d6f3-4aef-b870-985a4e3a400f" path="/var/lib/kubelet/pods/6a7be6e3-d6f3-4aef-b870-985a4e3a400f/volumes"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.175274 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"]
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.176801 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns" containerID="cri-o://98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0" gracePeriod=10
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.504421 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerStarted","Data":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"}
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.506763 4984 generic.go:334] "Generic (PLEG): container finished" podID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerID="98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0" exitCode=0
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.506844 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerDied","Data":"98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0"}
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.528416 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.528390482 podStartE2EDuration="2.528390482s" podCreationTimestamp="2026-01-30 10:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:18.527029725 +0000 UTC m=+1303.093333539" watchObservedRunningTime="2026-01-30 10:33:18.528390482 +0000 UTC m=+1303.094694306"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.566131 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.754398 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.779742 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") "
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.779898 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") "
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.779938 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") "
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.780012 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") "
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.780050 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") "
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.780075 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") pod \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\" (UID: \"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50\") "
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.793146 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz" (OuterVolumeSpecName: "kube-api-access-ttccz") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "kube-api-access-ttccz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.824387 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.824657 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.853459 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.868153 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config" (OuterVolumeSpecName: "config") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.870392 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.878640 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.881951 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.881987 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.882001 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.882017 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.882030 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttccz\" (UniqueName: \"kubernetes.io/projected/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-kube-api-access-ttccz\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.920023 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" (UID: "1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.984239 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:18 crc kubenswrapper[4984]: I0130 10:33:18.996542 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht"
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.085959 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") "
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.086295 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") "
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.086350 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") "
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.086428 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") pod \"e9c9c509-275d-47bc-81f8-755bab6b2be8\" (UID: \"e9c9c509-275d-47bc-81f8-755bab6b2be8\") "
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.098042 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts" (OuterVolumeSpecName: "scripts") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.113163 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5" (OuterVolumeSpecName: "kube-api-access-bjqc5") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "kube-api-access-bjqc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.127966 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.136451 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data" (OuterVolumeSpecName: "config-data") pod "e9c9c509-275d-47bc-81f8-755bab6b2be8" (UID: "e9c9c509-275d-47bc-81f8-755bab6b2be8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.188971 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.189533 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.189665 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjqc5\" (UniqueName: \"kubernetes.io/projected/e9c9c509-275d-47bc-81f8-755bab6b2be8-kube-api-access-bjqc5\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.189683 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9c9c509-275d-47bc-81f8-755bab6b2be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.518467 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx"
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.518460 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-k7xgx" event={"ID":"1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50","Type":"ContainerDied","Data":"944e20dd436d6475eadb44cdbbb965933e9f99f6729e1968115646dd8b334bc1"}
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.518726 4984 scope.go:117] "RemoveContainer" containerID="98b69ea2327bc52179d444583bf88f848e8c7346b6f999b944ab04e0cf5278b0"
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.528835 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hphht"
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.529872 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hphht" event={"ID":"e9c9c509-275d-47bc-81f8-755bab6b2be8","Type":"ContainerDied","Data":"e5e33b1fca148910fadb2320c0204e6859f314fe51e88bec9fdfc835c9853b27"}
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.529970 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e33b1fca148910fadb2320c0204e6859f314fe51e88bec9fdfc835c9853b27"
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.559295 4984 scope.go:117] "RemoveContainer" containerID="56bcdf99f2e8704de387e7830f17377f1640401317904a569f1e4bd023c74298"
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.571567 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"]
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.580619 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-k7xgx"]
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.601193 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.604737 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" containerID="cri-o://5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" gracePeriod=30
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.605474 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" containerID="cri-o://5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" gracePeriod=30
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.619708 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:33:19 crc kubenswrapper[4984]: I0130 10:33:19.641763 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.107603 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" path="/var/lib/kubelet/pods/1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50/volumes"
Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.539816 4984 generic.go:334] "Generic (PLEG): container finished" podID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" exitCode=143
Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.540117 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log" containerID="cri-o://ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" gracePeriod=30
Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.540224 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerDied","Data":"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a"}
Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.540440 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" containerID="cri-o://955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b" gracePeriod=30
Jan 30 10:33:20 crc kubenswrapper[4984]: I0130 10:33:20.541035 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata" containerID="cri-o://94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" gracePeriod=30
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.162795 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.237825 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.237946 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.238031 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.238076 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.238208 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") pod \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\" (UID: \"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.239116 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs" (OuterVolumeSpecName: "logs") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.244861 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7" (OuterVolumeSpecName: "kube-api-access-w5fg7") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "kube-api-access-w5fg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.265293 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.272561 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data" (OuterVolumeSpecName: "config-data") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.290577 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" (UID: "0bf7bcf5-b244-4f85-aa06-1bc47e550ec0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.340653 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5fg7\" (UniqueName: \"kubernetes.io/projected/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-kube-api-access-w5fg7\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.340861 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.340965 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.341043 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.341138 4984 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.552618 4984 generic.go:334] "Generic (PLEG): container finished" podID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerID="955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b" exitCode=0
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.552702 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerDied","Data":"955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b"}
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560230 4984 generic.go:334] "Generic (PLEG): container finished" podID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a" exitCode=0
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560288 4984 generic.go:334] "Generic (PLEG): container finished" podID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264" exitCode=143
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560310 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerDied","Data":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"}
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560340 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560412 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerDied","Data":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"}
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560439 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0bf7bcf5-b244-4f85-aa06-1bc47e550ec0","Type":"ContainerDied","Data":"4144ee9c2a28e71f131e6c10f223d1b110888ba0da851c6cf5c4df3303551826"}
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.560468 4984 scope.go:117] "RemoveContainer" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.590464 4984 scope.go:117] "RemoveContainer" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.609572 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.628702 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644306 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644791 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644820 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata"
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644840 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644849 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log"
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644876 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="init"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644885 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="init"
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644917 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644927 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns"
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.644946 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerName="nova-manage"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.644955 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerName="nova-manage"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645168 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-log"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645190 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa2e9dc-ca07-4e9e-8d8f-ee8ebc41ca50" containerName="dnsmasq-dns"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645212 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" containerName="nova-metadata-metadata"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.645222 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" containerName="nova-manage"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.646338 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.652561 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.652770 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.660877 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.670655 4984 scope.go:117] "RemoveContainer" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.671018 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": container with ID starting with 94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a not found: ID does not exist" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.671051 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"} err="failed to get container status \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": rpc error: code = NotFound desc = could not find container \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": container with ID starting with 94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a not found: ID does not exist"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.671072 4984 scope.go:117] "RemoveContainer" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"
Jan 30 10:33:21 crc kubenswrapper[4984]: E0130 10:33:21.672620 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": container with ID starting with ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264 not found: ID does not exist" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.672651 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} err="failed to get container status \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": rpc error: code = NotFound desc = could not find container \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": container with ID starting with ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264 not found: ID does not exist"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.672669 4984 scope.go:117] "RemoveContainer" containerID="94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.674327 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a"} err="failed to get container status \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": rpc error: code = NotFound desc = could not find container \"94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a\": container with ID starting with 94c3694db8b90e44e4fb93f9adf8a7229db0922a289be28bfefecde9fb80b24a not found: ID does not exist"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.674355 4984 scope.go:117] "RemoveContainer" containerID="ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.677427 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264"} err="failed to get container status \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": rpc error: code = NotFound desc = could not find container \"ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264\": container with ID starting with ff9602d35a0296c2ef84fbc3aa9d20fe0712c6a7a0538ac49dd3a00fff601264 not found: ID does not exist"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750617 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750683 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750762 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750808 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.750830 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.812410 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.852654 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") pod \"cbcb3e98-2063-421d-a76f-bca749fa2824\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853018 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") pod \"cbcb3e98-2063-421d-a76f-bca749fa2824\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853272 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") pod \"cbcb3e98-2063-421d-a76f-bca749fa2824\" (UID: \"cbcb3e98-2063-421d-a76f-bca749fa2824\") "
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853756 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.853898 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0"
Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854091 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854300 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.854394 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.858091 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.858576 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " 
pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.858754 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb" (OuterVolumeSpecName: "kube-api-access-b9cvb") pod "cbcb3e98-2063-421d-a76f-bca749fa2824" (UID: "cbcb3e98-2063-421d-a76f-bca749fa2824"). InnerVolumeSpecName "kube-api-access-b9cvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.859508 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.870204 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"nova-metadata-0\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") " pod="openstack/nova-metadata-0" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.882436 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbcb3e98-2063-421d-a76f-bca749fa2824" (UID: "cbcb3e98-2063-421d-a76f-bca749fa2824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.892371 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data" (OuterVolumeSpecName: "config-data") pod "cbcb3e98-2063-421d-a76f-bca749fa2824" (UID: "cbcb3e98-2063-421d-a76f-bca749fa2824"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.956889 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.956962 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9cvb\" (UniqueName: \"kubernetes.io/projected/cbcb3e98-2063-421d-a76f-bca749fa2824-kube-api-access-b9cvb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.956978 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcb3e98-2063-421d-a76f-bca749fa2824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:21 crc kubenswrapper[4984]: I0130 10:33:21.974322 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.143833 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf7bcf5-b244-4f85-aa06-1bc47e550ec0" path="/var/lib/kubelet/pods/0bf7bcf5-b244-4f85-aa06-1bc47e550ec0/volumes" Jan 30 10:33:22 crc kubenswrapper[4984]: W0130 10:33:22.473643 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40bafb7_7a35_49bc_aaed_9249967a6da1.slice/crio-390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72 WatchSource:0}: Error finding container 390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72: Status 404 returned error can't find the container with id 390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72 Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.473736 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.573333 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerStarted","Data":"390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72"} Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.581831 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cbcb3e98-2063-421d-a76f-bca749fa2824","Type":"ContainerDied","Data":"075badfc7e33df4bc00e0326585059cd80f908a2f58bb80568d432c2433bd27d"} Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.582202 4984 scope.go:117] "RemoveContainer" containerID="955a1fdfc4233ad8496439d5141f675e875676ae223b8bfb4cc454cea966611b" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.582086 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.616363 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.636390 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.644819 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: E0130 10:33:22.645162 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.645178 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.645393 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" containerName="nova-scheduler-scheduler" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.646115 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.650256 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.654051 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.774440 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.774493 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.774592 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.876505 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.876569 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.876670 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.880670 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.882440 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.894985 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"nova-scheduler-0\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") " pod="openstack/nova-scheduler-0" Jan 30 10:33:22 crc kubenswrapper[4984]: I0130 10:33:22.968287 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.440112 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 10:33:23 crc kubenswrapper[4984]: W0130 10:33:23.445221 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f8d034_e2e3_4db8_85b8_00459162d5ef.slice/crio-20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3 WatchSource:0}: Error finding container 20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3: Status 404 returned error can't find the container with id 20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3 Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.595219 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerStarted","Data":"c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe"} Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.595533 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerStarted","Data":"07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75"} Jan 30 10:33:23 crc kubenswrapper[4984]: I0130 10:33:23.597449 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerStarted","Data":"20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3"} Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.103121 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbcb3e98-2063-421d-a76f-bca749fa2824" path="/var/lib/kubelet/pods/cbcb3e98-2063-421d-a76f-bca749fa2824/volumes" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.448862 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.465307 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.465227447 podStartE2EDuration="3.465227447s" podCreationTimestamp="2026-01-30 10:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:23.616776084 +0000 UTC m=+1308.183079918" watchObservedRunningTime="2026-01-30 10:33:24.465227447 +0000 UTC m=+1309.031531271" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.504910 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.505009 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.505040 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" (UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.505156 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") pod \"53602417-9f58-4125-ae4e-50a4acbd15c6\" 
(UID: \"53602417-9f58-4125-ae4e-50a4acbd15c6\") " Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.506102 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs" (OuterVolumeSpecName: "logs") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.510206 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh" (OuterVolumeSpecName: "kube-api-access-wlhnh") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "kube-api-access-wlhnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.537523 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data" (OuterVolumeSpecName: "config-data") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.542820 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53602417-9f58-4125-ae4e-50a4acbd15c6" (UID: "53602417-9f58-4125-ae4e-50a4acbd15c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606729 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53602417-9f58-4125-ae4e-50a4acbd15c6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606757 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhnh\" (UniqueName: \"kubernetes.io/projected/53602417-9f58-4125-ae4e-50a4acbd15c6-kube-api-access-wlhnh\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606766 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.606775 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53602417-9f58-4125-ae4e-50a4acbd15c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.608802 4984 generic.go:334] "Generic (PLEG): container finished" podID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" exitCode=0 Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.608960 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerDied","Data":"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978"} Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.608987 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53602417-9f58-4125-ae4e-50a4acbd15c6","Type":"ContainerDied","Data":"ecd4903f8d6e5a12e35abf7e02e0342af660b1313797fb073846a8e0fffb44cd"} Jan 30 10:33:24 crc kubenswrapper[4984]: 
I0130 10:33:24.609002 4984 scope.go:117] "RemoveContainer" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.609098 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.621861 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerStarted","Data":"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"} Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.644239 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.644223645 podStartE2EDuration="2.644223645s" podCreationTimestamp="2026-01-30 10:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:24.640638878 +0000 UTC m=+1309.206942702" watchObservedRunningTime="2026-01-30 10:33:24.644223645 +0000 UTC m=+1309.210527469" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.665561 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.669886 4984 scope.go:117] "RemoveContainer" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.675013 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.696225 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.696847 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" Jan 30 
10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.696870 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.696890 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.696898 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.697090 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-log" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.697110 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" containerName="nova-api-api" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.701178 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.704225 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.718426 4984 scope.go:117] "RemoveContainer" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.719459 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978\": container with ID starting with 5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978 not found: ID does not exist" containerID="5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.719502 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978"} err="failed to get container status \"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978\": rpc error: code = NotFound desc = could not find container \"5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978\": container with ID starting with 5396dff530328155571e7e17bbd9e8c62a31764aa303e54531f1f3f363b2a978 not found: ID does not exist" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.719531 4984 scope.go:117] "RemoveContainer" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" Jan 30 10:33:24 crc kubenswrapper[4984]: E0130 10:33:24.719785 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a\": container with ID starting with 5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a not found: 
ID does not exist" containerID="5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.719807 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a"} err="failed to get container status \"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a\": rpc error: code = NotFound desc = could not find container \"5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a\": container with ID starting with 5f878f574634dc82ae32b93067926a334ffb2d10361f3b457300aa3073da0d1a not found: ID does not exist" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.733157 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.809668 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.810016 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.810125 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 
10:33:24.810209 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912262 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912354 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912392 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.912441 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.913972 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"nova-api-0\" (UID: 
\"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.921373 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.921552 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:24 crc kubenswrapper[4984]: I0130 10:33:24.944687 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"nova-api-0\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " pod="openstack/nova-api-0" Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.029562 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.345463 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:25 crc kubenswrapper[4984]: W0130 10:33:25.347932 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod808d797f_903f_4730_a470_4f78f53409ae.slice/crio-e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36 WatchSource:0}: Error finding container e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36: Status 404 returned error can't find the container with id e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36 Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.631384 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerStarted","Data":"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4"} Jan 30 10:33:25 crc kubenswrapper[4984]: I0130 10:33:25.631442 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerStarted","Data":"e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36"} Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.105141 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53602417-9f58-4125-ae4e-50a4acbd15c6" path="/var/lib/kubelet/pods/53602417-9f58-4125-ae4e-50a4acbd15c6/volumes" Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.642919 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerStarted","Data":"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791"} Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.667043 4984 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.667025261 podStartE2EDuration="2.667025261s" podCreationTimestamp="2026-01-30 10:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:26.665281583 +0000 UTC m=+1311.231585407" watchObservedRunningTime="2026-01-30 10:33:26.667025261 +0000 UTC m=+1311.233329085" Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.974949 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:33:26 crc kubenswrapper[4984]: I0130 10:33:26.975345 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:33:27 crc kubenswrapper[4984]: I0130 10:33:27.969209 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 10:33:30 crc kubenswrapper[4984]: I0130 10:33:30.567846 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 10:33:31 crc kubenswrapper[4984]: I0130 10:33:31.975587 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:33:31 crc kubenswrapper[4984]: I0130 10:33:31.975988 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:33:32 crc kubenswrapper[4984]: I0130 10:33:32.968886 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.002140 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058495 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058572 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058781 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.058812 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:33:33 crc kubenswrapper[4984]: I0130 10:33:33.741315 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.001203 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.001458 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" containerID="cri-o://5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" gracePeriod=30 Jan 30 
10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.030898 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.030947 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.554194 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.733917 4984 generic.go:334] "Generic (PLEG): container finished" podID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" exitCode=2 Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.733982 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerDied","Data":"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822"} Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.734008 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2d180dfe-bc61-4961-b672-20c6ff8c2911","Type":"ContainerDied","Data":"f05fd5917bae61700291c3765574cc3a3b08139624adb6fb3ccd5f7058c55fa6"} Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.734023 4984 scope.go:117] "RemoveContainer" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.734130 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.736517 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") pod \"2d180dfe-bc61-4961-b672-20c6ff8c2911\" (UID: \"2d180dfe-bc61-4961-b672-20c6ff8c2911\") " Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.741903 4984 generic.go:334] "Generic (PLEG): container finished" podID="6148a148-07c4-4584-95ff-10d5e5147954" containerID="01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df" exitCode=0 Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.741952 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerDied","Data":"01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df"} Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.749641 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h" (OuterVolumeSpecName: "kube-api-access-psg4h") pod "2d180dfe-bc61-4961-b672-20c6ff8c2911" (UID: "2d180dfe-bc61-4961-b672-20c6ff8c2911"). InnerVolumeSpecName "kube-api-access-psg4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.817768 4984 scope.go:117] "RemoveContainer" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" Jan 30 10:33:35 crc kubenswrapper[4984]: E0130 10:33:35.818845 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822\": container with ID starting with 5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822 not found: ID does not exist" containerID="5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.818888 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822"} err="failed to get container status \"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822\": rpc error: code = NotFound desc = could not find container \"5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822\": container with ID starting with 5b5a1d10b4e6537aaf08ec1279c154092de58057ec036f425ee18acce6ca7822 not found: ID does not exist" Jan 30 10:33:35 crc kubenswrapper[4984]: I0130 10:33:35.838828 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psg4h\" (UniqueName: \"kubernetes.io/projected/2d180dfe-bc61-4961-b672-20c6ff8c2911-kube-api-access-psg4h\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.077177 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.089301 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.112581 4984 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" path="/var/lib/kubelet/pods/2d180dfe-bc61-4961-b672-20c6ff8c2911/volumes" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113302 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113433 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:36 crc kubenswrapper[4984]: E0130 10:33:36.113650 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113668 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113715 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.113861 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d180dfe-bc61-4961-b672-20c6ff8c2911" containerName="kube-state-metrics" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.114661 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.117693 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.117890 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.123303 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250196 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250300 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tr2b\" (UniqueName: \"kubernetes.io/projected/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-api-access-9tr2b\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250555 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.250660 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352313 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352395 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352450 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tr2b\" (UniqueName: \"kubernetes.io/projected/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-api-access-9tr2b\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.352576 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.373558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.373650 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.373732 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.377711 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tr2b\" (UniqueName: \"kubernetes.io/projected/6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa-kube-api-access-9tr2b\") pod \"kube-state-metrics-0\" (UID: \"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa\") " pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.443367 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 10:33:36 crc kubenswrapper[4984]: I0130 10:33:36.946772 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.043869 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057289 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057614 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" containerID="cri-o://33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057776 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" containerID="cri-o://d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057833 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" containerID="cri-o://26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.057875 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" containerID="cri-o://6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb" gracePeriod=30 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.083758 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc 
kubenswrapper[4984]: I0130 10:33:37.084059 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.084236 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.084306 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") pod \"6148a148-07c4-4584-95ff-10d5e5147954\" (UID: \"6148a148-07c4-4584-95ff-10d5e5147954\") " Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.094706 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg" (OuterVolumeSpecName: "kube-api-access-68tjg") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "kube-api-access-68tjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.100416 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts" (OuterVolumeSpecName: "scripts") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.124012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.137593 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data" (OuterVolumeSpecName: "config-data") pod "6148a148-07c4-4584-95ff-10d5e5147954" (UID: "6148a148-07c4-4584-95ff-10d5e5147954"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190226 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68tjg\" (UniqueName: \"kubernetes.io/projected/6148a148-07c4-4584-95ff-10d5e5147954-kube-api-access-68tjg\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190557 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190573 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.190585 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6148a148-07c4-4584-95ff-10d5e5147954-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 
10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.764682 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa","Type":"ContainerStarted","Data":"d27c80cb732878754ae4da63499033ddf9f11a0242673f07d88fc64594e78890"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.764755 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa","Type":"ContainerStarted","Data":"735c4f2833a6a7a155fb0b86bc128f1b0093f6574118e6330b7e8b132f11d425"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.765921 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.767277 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" event={"ID":"6148a148-07c4-4584-95ff-10d5e5147954","Type":"ContainerDied","Data":"7460d26f16ced1d1e6a9ddf520dce3ce58c888acd0fc9117f073f9d56ecfe696"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.767301 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7460d26f16ced1d1e6a9ddf520dce3ce58c888acd0fc9117f073f9d56ecfe696" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.767338 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nvx8g" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773215 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179" exitCode=0 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773269 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f" exitCode=2 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773282 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce" exitCode=0 Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773309 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773337 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.773351 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce"} Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.789484 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.415074564 podStartE2EDuration="1.789465002s" 
podCreationTimestamp="2026-01-30 10:33:36 +0000 UTC" firstStartedPulling="2026-01-30 10:33:36.964670038 +0000 UTC m=+1321.530973862" lastFinishedPulling="2026-01-30 10:33:37.339060476 +0000 UTC m=+1321.905364300" observedRunningTime="2026-01-30 10:33:37.784047697 +0000 UTC m=+1322.350351521" watchObservedRunningTime="2026-01-30 10:33:37.789465002 +0000 UTC m=+1322.355768826" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.849691 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: E0130 10:33:37.850112 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6148a148-07c4-4584-95ff-10d5e5147954" containerName="nova-cell1-conductor-db-sync" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.850145 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6148a148-07c4-4584-95ff-10d5e5147954" containerName="nova-cell1-conductor-db-sync" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.850393 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6148a148-07c4-4584-95ff-10d5e5147954" containerName="nova-cell1-conductor-db-sync" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.851114 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.856730 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.870537 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.938180 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.938235 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7sw\" (UniqueName: \"kubernetes.io/projected/5b097926-177e-428a-a271-ede45f90f7d6-kube-api-access-fn7sw\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:37 crc kubenswrapper[4984]: I0130 10:33:37.938426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.040496 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc 
kubenswrapper[4984]: I0130 10:33:38.040683 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.040719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7sw\" (UniqueName: \"kubernetes.io/projected/5b097926-177e-428a-a271-ede45f90f7d6-kube-api-access-fn7sw\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.049003 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.060476 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b097926-177e-428a-a271-ede45f90f7d6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.080890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7sw\" (UniqueName: \"kubernetes.io/projected/5b097926-177e-428a-a271-ede45f90f7d6-kube-api-access-fn7sw\") pod \"nova-cell1-conductor-0\" (UID: \"5b097926-177e-428a-a271-ede45f90f7d6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.166193 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:38 crc kubenswrapper[4984]: W0130 10:33:38.670749 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b097926_177e_428a_a271_ede45f90f7d6.slice/crio-bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0 WatchSource:0}: Error finding container bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0: Status 404 returned error can't find the container with id bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0 Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.682961 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 10:33:38 crc kubenswrapper[4984]: I0130 10:33:38.790650 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5b097926-177e-428a-a271-ede45f90f7d6","Type":"ContainerStarted","Data":"bbb4e909e3985f8e88e4fcf4d73405385da24585e7bbe160a935fdeb9a51b0d0"} Jan 30 10:33:39 crc kubenswrapper[4984]: I0130 10:33:39.802447 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5b097926-177e-428a-a271-ede45f90f7d6","Type":"ContainerStarted","Data":"2305d9ce55d27314f10eb520e705ef9a5bf155953791d41477a95951fc2306ef"} Jan 30 10:33:39 crc kubenswrapper[4984]: I0130 10:33:39.827455 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.827437787 podStartE2EDuration="2.827437787s" podCreationTimestamp="2026-01-30 10:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:39.817502279 +0000 UTC m=+1324.383806123" watchObservedRunningTime="2026-01-30 10:33:39.827437787 +0000 UTC m=+1324.393741611" Jan 30 10:33:40 crc kubenswrapper[4984]: I0130 
10:33:40.811113 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.823185 4984 generic.go:334] "Generic (PLEG): container finished" podID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerID="6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb" exitCode=0 Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.824476 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb"} Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.824507 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c10d6ea-d3d3-49cf-8185-0b4946edc4be","Type":"ContainerDied","Data":"6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4"} Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.824517 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6707d35ad9110f663f69579f674ea06d766b5ae489b9cf59448a60a1777eb0d4" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.848958 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.982173 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.984398 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:33:41 crc kubenswrapper[4984]: I0130 10:33:41.994375 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009706 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009816 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.009869 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc 
kubenswrapper[4984]: I0130 10:33:42.009893 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010112 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010204 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010593 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.010787 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") pod \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\" (UID: \"5c10d6ea-d3d3-49cf-8185-0b4946edc4be\") " Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.011442 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.011463 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.015446 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm" (OuterVolumeSpecName: "kube-api-access-kg4jm") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "kube-api-access-kg4jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.028366 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts" (OuterVolumeSpecName: "scripts") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.044288 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.114415 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.114636 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg4jm\" (UniqueName: \"kubernetes.io/projected/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-kube-api-access-kg4jm\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.114721 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.186428 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.193456 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data" (OuterVolumeSpecName: "config-data") pod "5c10d6ea-d3d3-49cf-8185-0b4946edc4be" (UID: "5c10d6ea-d3d3-49cf-8185-0b4946edc4be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.222906 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.222946 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c10d6ea-d3d3-49cf-8185-0b4946edc4be-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.832353 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.843280 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.890181 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.897916 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.932464 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933052 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933072 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933101 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933109 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933133 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933141 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" Jan 30 10:33:42 crc kubenswrapper[4984]: E0130 10:33:42.933158 4984 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933165 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933428 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="proxy-httpd" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933454 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-central-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933478 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="sg-core" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.933490 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" containerName="ceilometer-notification-agent" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.937819 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.940776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.941076 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.941190 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:33:42 crc kubenswrapper[4984]: I0130 10:33:42.952155 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141497 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141543 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141610 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141676 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141724 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141785 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.141882 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.195130 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244269 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244333 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244696 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244732 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244805 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " 
pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244834 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.244911 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.246202 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.246667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.250268 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.251933 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.252043 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.252710 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.253154 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.266209 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"ceilometer-0\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.291091 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:43 crc kubenswrapper[4984]: I0130 10:33:43.838841 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.103769 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c10d6ea-d3d3-49cf-8185-0b4946edc4be" path="/var/lib/kubelet/pods/5c10d6ea-d3d3-49cf-8185-0b4946edc4be/volumes" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.760358 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.849858 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.849903 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"53c0f3d53985bd14e3af60b8e4782ac2a173cbd158ba349d861221960b04dc9d"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851299 4984 generic.go:334] "Generic (PLEG): container finished" podID="d02652b8-5031-4209-b2e7-228742c7a308" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" exitCode=137 Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851379 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerDied","Data":"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851393 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851411 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d02652b8-5031-4209-b2e7-228742c7a308","Type":"ContainerDied","Data":"af5529722b6cfea0d21c483516240a6c61e08bb8fa1bfc0ece4e5fb90209726f"} Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.851429 4984 scope.go:117] "RemoveContainer" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.899971 4984 scope.go:117] "RemoveContainer" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" Jan 30 10:33:44 crc kubenswrapper[4984]: E0130 10:33:44.900455 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940\": container with ID starting with 3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940 not found: ID does not exist" containerID="3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.900503 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940"} err="failed to get container status \"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940\": rpc error: code = NotFound desc = could not find container \"3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940\": container with ID starting with 3daa3951e07f454681a2a637b1148528e85b9c85c387f939b1749f2452133940 not found: ID does not exist" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.916159 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klqv\" (UniqueName: 
\"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") pod \"d02652b8-5031-4209-b2e7-228742c7a308\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.916639 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") pod \"d02652b8-5031-4209-b2e7-228742c7a308\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.916732 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") pod \"d02652b8-5031-4209-b2e7-228742c7a308\" (UID: \"d02652b8-5031-4209-b2e7-228742c7a308\") " Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.921449 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv" (OuterVolumeSpecName: "kube-api-access-4klqv") pod "d02652b8-5031-4209-b2e7-228742c7a308" (UID: "d02652b8-5031-4209-b2e7-228742c7a308"). InnerVolumeSpecName "kube-api-access-4klqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.940755 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data" (OuterVolumeSpecName: "config-data") pod "d02652b8-5031-4209-b2e7-228742c7a308" (UID: "d02652b8-5031-4209-b2e7-228742c7a308"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:44 crc kubenswrapper[4984]: I0130 10:33:44.946656 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d02652b8-5031-4209-b2e7-228742c7a308" (UID: "d02652b8-5031-4209-b2e7-228742c7a308"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.018500 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.018531 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02652b8-5031-4209-b2e7-228742c7a308-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.018543 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4klqv\" (UniqueName: \"kubernetes.io/projected/d02652b8-5031-4209-b2e7-228742c7a308-kube-api-access-4klqv\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.034517 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.035022 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.039409 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.040648 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 10:33:45 crc 
kubenswrapper[4984]: I0130 10:33:45.295627 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.312291 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.324771 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: E0130 10:33:45.325441 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.325469 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.325689 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02652b8-5031-4209-b2e7-228742c7a308" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.326482 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.337513 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.337900 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.339013 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.353404 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.430887 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.431021 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.431075 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: 
I0130 10:33:45.431108 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.431212 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2q5\" (UniqueName: \"kubernetes.io/projected/3933f23e-210c-483f-82ec-eb0cdbc09f4c-kube-api-access-bk2q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.532801 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.532854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.532880 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 
10:33:45.532954 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2q5\" (UniqueName: \"kubernetes.io/projected/3933f23e-210c-483f-82ec-eb0cdbc09f4c-kube-api-access-bk2q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.533022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.537062 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.537384 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.537851 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.539557 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3933f23e-210c-483f-82ec-eb0cdbc09f4c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.548005 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2q5\" (UniqueName: \"kubernetes.io/projected/3933f23e-210c-483f-82ec-eb0cdbc09f4c-kube-api-access-bk2q5\") pod \"nova-cell1-novncproxy-0\" (UID: \"3933f23e-210c-483f-82ec-eb0cdbc09f4c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.652113 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.965211 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc"} Jan 30 10:33:45 crc kubenswrapper[4984]: I0130 10:33:45.965557 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.030574 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.197789 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02652b8-5031-4209-b2e7-228742c7a308" path="/var/lib/kubelet/pods/d02652b8-5031-4209-b2e7-228742c7a308/volumes" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.232206 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.234222 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.242307 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:33:46 crc kubenswrapper[4984]: W0130 10:33:46.328746 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3933f23e_210c_483f_82ec_eb0cdbc09f4c.slice/crio-f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54 WatchSource:0}: Error finding container f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54: Status 404 returned error can't find the container with id f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54 Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.331343 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369269 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369378 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369449 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369471 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.369567 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.470550 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471626 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 
10:33:46.471680 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471739 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471764 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471800 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.471821 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.472642 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.473797 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.473955 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.474230 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.474415 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.491355 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jld6b\" (UniqueName: 
\"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"dnsmasq-dns-cd5cbd7b9-cct6l\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.567669 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.980975 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3933f23e-210c-483f-82ec-eb0cdbc09f4c","Type":"ContainerStarted","Data":"213bdfe85637ff917d49ef7851de52eda84aa268b35f0e0bf7c6811943ca822f"} Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.981387 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3933f23e-210c-483f-82ec-eb0cdbc09f4c","Type":"ContainerStarted","Data":"f38423796de70a966ca9b707d7e9fd6553a8885ee13fc06f3a18025f53dc4d54"} Jan 30 10:33:46 crc kubenswrapper[4984]: I0130 10:33:46.994531 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291"} Jan 30 10:33:47 crc kubenswrapper[4984]: I0130 10:33:47.014233 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.014217504 podStartE2EDuration="2.014217504s" podCreationTimestamp="2026-01-30 10:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:47.004866842 +0000 UTC m=+1331.571170666" watchObservedRunningTime="2026-01-30 10:33:47.014217504 +0000 UTC m=+1331.580521328" Jan 30 10:33:47 crc kubenswrapper[4984]: I0130 10:33:47.053333 4984 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.011582 4984 generic.go:334] "Generic (PLEG): container finished" podID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerID="6f684411c439001a58a467c45183371d748f6a158f135c5dea4ecaa3e03b6d12" exitCode=0 Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.011650 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerDied","Data":"6f684411c439001a58a467c45183371d748f6a158f135c5dea4ecaa3e03b6d12"} Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.012208 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerStarted","Data":"3a743fd4af77fa8320a0aa82fc1ee65e702a095968f1a2be7dbc346d0b4f3fe2"} Jan 30 10:33:48 crc kubenswrapper[4984]: I0130 10:33:48.071280 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.023101 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerStarted","Data":"e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1"} Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.023526 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.035907 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerStarted","Data":"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1"} Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.036019 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.036167 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" containerID="cri-o://af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" gracePeriod=30 Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.036189 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" containerID="cri-o://a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" gracePeriod=30 Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.063975 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" podStartSLOduration=3.063956565 podStartE2EDuration="3.063956565s" podCreationTimestamp="2026-01-30 10:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:49.056741781 +0000 UTC m=+1333.623045605" watchObservedRunningTime="2026-01-30 10:33:49.063956565 +0000 UTC m=+1333.630260389" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.085987 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.069671329 podStartE2EDuration="7.085970309s" podCreationTimestamp="2026-01-30 10:33:42 +0000 UTC" firstStartedPulling="2026-01-30 10:33:43.844279171 +0000 UTC m=+1328.410582995" lastFinishedPulling="2026-01-30 10:33:47.860578131 +0000 UTC m=+1332.426881975" observedRunningTime="2026-01-30 10:33:49.084040977 +0000 UTC m=+1333.650344801" watchObservedRunningTime="2026-01-30 10:33:49.085970309 +0000 UTC m=+1333.652274133" Jan 30 10:33:49 crc kubenswrapper[4984]: I0130 10:33:49.157082 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:50 crc kubenswrapper[4984]: I0130 10:33:50.046062 4984 generic.go:334] "Generic (PLEG): container finished" podID="808d797f-903f-4730-a470-4f78f53409ae" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" exitCode=143 Jan 30 10:33:50 crc kubenswrapper[4984]: I0130 10:33:50.046150 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerDied","Data":"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4"} Jan 30 10:33:50 crc kubenswrapper[4984]: I0130 10:33:50.652738 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055361 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" containerID="cri-o://7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" gracePeriod=30 Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055431 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" containerID="cri-o://4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" gracePeriod=30 Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055470 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" containerID="cri-o://ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" gracePeriod=30 Jan 30 10:33:51 crc kubenswrapper[4984]: I0130 10:33:51.055501 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" containerID="cri-o://b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" gracePeriod=30 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.078965 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" exitCode=0 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079299 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" exitCode=2 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079312 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" exitCode=0 Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079007 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1"} Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079349 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291"} Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.079364 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc"} Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.725303 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.806854 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.806967 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.807109 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.807203 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") pod \"808d797f-903f-4730-a470-4f78f53409ae\" (UID: \"808d797f-903f-4730-a470-4f78f53409ae\") " Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.807557 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs" (OuterVolumeSpecName: "logs") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.808164 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/808d797f-903f-4730-a470-4f78f53409ae-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.824484 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r" (OuterVolumeSpecName: "kube-api-access-x762r") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "kube-api-access-x762r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.858363 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910428 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data" (OuterVolumeSpecName: "config-data") pod "808d797f-903f-4730-a470-4f78f53409ae" (UID: "808d797f-903f-4730-a470-4f78f53409ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910741 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910783 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/808d797f-903f-4730-a470-4f78f53409ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:52 crc kubenswrapper[4984]: I0130 10:33:52.910795 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x762r\" (UniqueName: \"kubernetes.io/projected/808d797f-903f-4730-a470-4f78f53409ae-kube-api-access-x762r\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.092086 4984 generic.go:334] "Generic (PLEG): container finished" podID="808d797f-903f-4730-a470-4f78f53409ae" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" exitCode=0 Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.092162 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.092925 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerDied","Data":"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791"} Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.097459 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"808d797f-903f-4730-a470-4f78f53409ae","Type":"ContainerDied","Data":"e6ab1055689e7b9bb78f73eb3d8714ea06e2dfcf71a11789552506a676e89f36"} Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.097502 4984 scope.go:117] "RemoveContainer" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.129650 4984 scope.go:117] "RemoveContainer" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.151462 4984 scope.go:117] "RemoveContainer" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.152031 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791\": container with ID starting with a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791 not found: ID does not exist" containerID="a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.152123 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791"} err="failed to get container status \"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791\": rpc error: code = 
NotFound desc = could not find container \"a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791\": container with ID starting with a954d5721d39e870d0b320e7158471bc2d7fc4a6d76394b219553bac8bc1d791 not found: ID does not exist" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.152192 4984 scope.go:117] "RemoveContainer" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.152694 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4\": container with ID starting with af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4 not found: ID does not exist" containerID="af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.152728 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4"} err="failed to get container status \"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4\": rpc error: code = NotFound desc = could not find container \"af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4\": container with ID starting with af99d5afc45e67005fad219308fb395ed0f3e820dbcb9732b659b5dc1c9aada4 not found: ID does not exist" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.162334 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.170993 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.179422 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.181559 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181589 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" Jan 30 10:33:53 crc kubenswrapper[4984]: E0130 10:33:53.181615 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181624 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181900 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-log" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.181935 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="808d797f-903f-4730-a470-4f78f53409ae" containerName="nova-api-api" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.183167 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.190730 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.190831 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.190852 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.191024 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325437 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325848 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325872 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.325937 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.326160 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.326230 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.437804 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.437933 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.437957 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" 
Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.438058 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.438126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.438156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.440980 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.445446 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.454560 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.458133 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.459647 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.462842 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") " pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.510701 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.633287 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744409 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744490 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744533 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744601 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744688 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744776 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744837 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.744866 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") pod \"8845436d-e0d5-400d-bb54-18e9ffcb036f\" (UID: \"8845436d-e0d5-400d-bb54-18e9ffcb036f\") " Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.746161 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.746569 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.749508 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw" (OuterVolumeSpecName: "kube-api-access-8s9mw") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "kube-api-access-8s9mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.750030 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts" (OuterVolumeSpecName: "scripts") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.784178 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.811277 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847043 4984 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847079 4984 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847095 4984 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847107 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847120 4984 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8845436d-e0d5-400d-bb54-18e9ffcb036f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.847132 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s9mw\" (UniqueName: \"kubernetes.io/projected/8845436d-e0d5-400d-bb54-18e9ffcb036f-kube-api-access-8s9mw\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.851030 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data" (OuterVolumeSpecName: "config-data") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.853515 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8845436d-e0d5-400d-bb54-18e9ffcb036f" (UID: "8845436d-e0d5-400d-bb54-18e9ffcb036f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.949463 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:53 crc kubenswrapper[4984]: I0130 10:33:53.949519 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845436d-e0d5-400d-bb54-18e9ffcb036f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.008232 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.112649 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808d797f-903f-4730-a470-4f78f53409ae" path="/var/lib/kubelet/pods/808d797f-903f-4730-a470-4f78f53409ae/volumes" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.112852 4984 generic.go:334] "Generic (PLEG): container finished" podID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" exitCode=0 Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.112970 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.114864 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerStarted","Data":"4c5623a8a80ebe39b23eb64dff70541e70da513adb322fa0a48cea9507d68a53"} Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.115237 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068"} Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.115661 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8845436d-e0d5-400d-bb54-18e9ffcb036f","Type":"ContainerDied","Data":"53c0f3d53985bd14e3af60b8e4782ac2a173cbd158ba349d861221960b04dc9d"} Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.115694 4984 scope.go:117] "RemoveContainer" containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.171409 4984 scope.go:117] "RemoveContainer" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.176512 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.193442 4984 scope.go:117] "RemoveContainer" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.193607 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205104 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205737 4984 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205767 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205805 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205817 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205857 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205870 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.205899 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.205910 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.206133 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-central-agent" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.206186 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="ceilometer-notification-agent" Jan 30 10:33:54 crc 
kubenswrapper[4984]: I0130 10:33:54.206207 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="sg-core" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.206228 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" containerName="proxy-httpd" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.208489 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.210868 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.211550 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.212449 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.224330 4984 scope.go:117] "RemoveContainer" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.235340 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.251677 4984 scope.go:117] "RemoveContainer" containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.252163 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1\": container with ID starting with ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1 not found: ID does not exist" 
containerID="ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252217 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1"} err="failed to get container status \"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1\": rpc error: code = NotFound desc = could not find container \"ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1\": container with ID starting with ecea949b58b037f66681a6993eeb40b1ce6836e781411a268507914395de65b1 not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252238 4984 scope.go:117] "RemoveContainer" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.252703 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291\": container with ID starting with 4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291 not found: ID does not exist" containerID="4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252744 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291"} err="failed to get container status \"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291\": rpc error: code = NotFound desc = could not find container \"4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291\": container with ID starting with 4a490f9155cf58a23190068d9f440deaad98be591dfdedcb6e6e19e527382291 not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.252773 4984 scope.go:117] 
"RemoveContainer" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.253135 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc\": container with ID starting with b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc not found: ID does not exist" containerID="b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.253162 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc"} err="failed to get container status \"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc\": rpc error: code = NotFound desc = could not find container \"b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc\": container with ID starting with b352afcc2c9d99fc61719be5949bb95017af6bcbe8fc54d69afea22f7126f6fc not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.253182 4984 scope.go:117] "RemoveContainer" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" Jan 30 10:33:54 crc kubenswrapper[4984]: E0130 10:33:54.253522 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068\": container with ID starting with 7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068 not found: ID does not exist" containerID="7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.253543 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068"} err="failed to get container status \"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068\": rpc error: code = NotFound desc = could not find container \"7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068\": container with ID starting with 7dbc08e898e04ebf58b65549c071b6f38171a3128c56800cd9a0a1d358116068 not found: ID does not exist" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359139 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-run-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359181 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359220 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359584 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-config-data\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: 
I0130 10:33:54.359806 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359886 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvfj\" (UniqueName: \"kubernetes.io/projected/aa8fceae-cb31-48dd-8104-9a905f788af6-kube-api-access-9kvfj\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.359950 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-log-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.360030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-scripts\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.461854 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-config-data\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462036 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462099 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvfj\" (UniqueName: \"kubernetes.io/projected/aa8fceae-cb31-48dd-8104-9a905f788af6-kube-api-access-9kvfj\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462164 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-log-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462235 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-scripts\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462405 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-run-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.462458 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 
10:33:54.462531 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.463235 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-log-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.466575 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa8fceae-cb31-48dd-8104-9a905f788af6-run-httpd\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.467400 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-scripts\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.475163 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.476041 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.476763 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-config-data\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.479309 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8fceae-cb31-48dd-8104-9a905f788af6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.482475 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvfj\" (UniqueName: \"kubernetes.io/projected/aa8fceae-cb31-48dd-8104-9a905f788af6-kube-api-access-9kvfj\") pod \"ceilometer-0\" (UID: \"aa8fceae-cb31-48dd-8104-9a905f788af6\") " pod="openstack/ceilometer-0" Jan 30 10:33:54 crc kubenswrapper[4984]: I0130 10:33:54.531086 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.037610 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 10:33:55 crc kubenswrapper[4984]: W0130 10:33:55.042440 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa8fceae_cb31_48dd_8104_9a905f788af6.slice/crio-e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933 WatchSource:0}: Error finding container e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933: Status 404 returned error can't find the container with id e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933 Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.121917 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerStarted","Data":"f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978"} Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.121959 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerStarted","Data":"6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f"} Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.126173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"e945f18c1f1e5986b211fa4eaa6dc5eaada98507d846b59fdc28efec9bcd6933"} Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.139999 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.139982006 podStartE2EDuration="2.139982006s" podCreationTimestamp="2026-01-30 10:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:55.139558975 +0000 UTC m=+1339.705862799" watchObservedRunningTime="2026-01-30 10:33:55.139982006 +0000 UTC m=+1339.706285830" Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.652906 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:55 crc kubenswrapper[4984]: I0130 10:33:55.670661 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.101553 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8845436d-e0d5-400d-bb54-18e9ffcb036f" path="/var/lib/kubelet/pods/8845436d-e0d5-400d-bb54-18e9ffcb036f/volumes" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.135922 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"d5bd4b62675a63131fb10128259c68f2b8a481a886085edc14e1177bb2781fd6"} Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.150242 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.343948 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.345337 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.350531 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.351030 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.364509 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511156 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511299 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511418 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.511711 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.569403 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620571 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620625 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620668 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.620905 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: 
I0130 10:33:56.629935 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.631211 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.642869 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.643074 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"nova-cell1-cell-mapping-489nm\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") " pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.644415 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.644640 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns" containerID="cri-o://dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f" 
gracePeriod=10 Jan 30 10:33:56 crc kubenswrapper[4984]: I0130 10:33:56.758879 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.182578 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"4d33293ab1474476948154b916031a2bd87df947166d1d0df66c9f2a4ce3c9f7"} Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.189512 4984 generic.go:334] "Generic (PLEG): container finished" podID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerID="dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f" exitCode=0 Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.189595 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerDied","Data":"dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f"} Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.286480 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461621 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461791 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461896 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.461975 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.462031 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.462078 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") pod \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\" (UID: \"2bea2708-4bb8-48d3-ba2a-0b28a921c053\") " Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.486487 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72" (OuterVolumeSpecName: "kube-api-access-78r72") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "kube-api-access-78r72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.519190 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:33:57 crc kubenswrapper[4984]: W0130 10:33:57.519957 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda005f64f_9ec0_4a4a_b64e_9ae00924dce7.slice/crio-b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9 WatchSource:0}: Error finding container b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9: Status 404 returned error can't find the container with id b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9 Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.534967 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.536789 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config" (OuterVolumeSpecName: "config") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.559034 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564183 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564441 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564469 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78r72\" (UniqueName: \"kubernetes.io/projected/2bea2708-4bb8-48d3-ba2a-0b28a921c053-kube-api-access-78r72\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564481 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564490 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.564499 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-config\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.597551 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2bea2708-4bb8-48d3-ba2a-0b28a921c053" (UID: "2bea2708-4bb8-48d3-ba2a-0b28a921c053"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:33:57 crc kubenswrapper[4984]: I0130 10:33:57.666210 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2bea2708-4bb8-48d3-ba2a-0b28a921c053-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.203992 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"03ef2dbacbf8edaa5e2363012639f1cbc9bed34f5839518cd1ff06c6fe10ae8a"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.205735 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" event={"ID":"2bea2708-4bb8-48d3-ba2a-0b28a921c053","Type":"ContainerDied","Data":"d321da41062e4b6042ed3a9bb6a7b9877923a06f6b266f1b243b188fd84ea8bc"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.205796 4984 scope.go:117] "RemoveContainer" containerID="dc0780c922de50ac13d4207b18fc46385e63689786e093b540d855ea0f201f0f" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.205955 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-k2jmh" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.211424 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerStarted","Data":"899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.211480 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerStarted","Data":"b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9"} Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.241273 4984 scope.go:117] "RemoveContainer" containerID="aa1f69e5832486947c309113f3fb6a6493f2b91d3f8828fd6cfe76af73d8b0a8" Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.244709 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.259022 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-k2jmh"] Jan 30 10:33:58 crc kubenswrapper[4984]: I0130 10:33:58.262775 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-489nm" podStartSLOduration=2.262760007 podStartE2EDuration="2.262760007s" podCreationTimestamp="2026-01-30 10:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:33:58.244726501 +0000 UTC m=+1342.811030345" watchObservedRunningTime="2026-01-30 10:33:58.262760007 +0000 UTC m=+1342.829063831" Jan 30 10:33:59 crc kubenswrapper[4984]: I0130 10:33:59.224839 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aa8fceae-cb31-48dd-8104-9a905f788af6","Type":"ContainerStarted","Data":"2c2e2bb79ec3040a4ee01008f1909f5acb8f2c030b8c3dbd1d7e91118279ad0e"} Jan 30 10:33:59 crc kubenswrapper[4984]: I0130 10:33:59.225791 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 10:33:59 crc kubenswrapper[4984]: I0130 10:33:59.260557 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.432503524 podStartE2EDuration="5.260532996s" podCreationTimestamp="2026-01-30 10:33:54 +0000 UTC" firstStartedPulling="2026-01-30 10:33:55.046494585 +0000 UTC m=+1339.612798409" lastFinishedPulling="2026-01-30 10:33:58.874524047 +0000 UTC m=+1343.440827881" observedRunningTime="2026-01-30 10:33:59.249976223 +0000 UTC m=+1343.816280047" watchObservedRunningTime="2026-01-30 10:33:59.260532996 +0000 UTC m=+1343.826836820" Jan 30 10:34:00 crc kubenswrapper[4984]: I0130 10:34:00.105007 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" path="/var/lib/kubelet/pods/2bea2708-4bb8-48d3-ba2a-0b28a921c053/volumes" Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.000355 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.000969 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.295024 4984 
generic.go:334] "Generic (PLEG): container finished" podID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerID="899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3" exitCode=0 Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.295106 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerDied","Data":"899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3"} Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.511439 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:03 crc kubenswrapper[4984]: I0130 10:34:03.511852 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.525502 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.525516 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.698677 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm"
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796160 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") "
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796294 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") "
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") "
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.796400 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") pod \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\" (UID: \"a005f64f-9ec0-4a4a-b64e-9ae00924dce7\") "
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.804913 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb" (OuterVolumeSpecName: "kube-api-access-tmsbb") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "kube-api-access-tmsbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.806776 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts" (OuterVolumeSpecName: "scripts") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.831373 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data" (OuterVolumeSpecName: "config-data") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.851867 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a005f64f-9ec0-4a4a-b64e-9ae00924dce7" (UID: "a005f64f-9ec0-4a4a-b64e-9ae00924dce7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899230 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmsbb\" (UniqueName: \"kubernetes.io/projected/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-kube-api-access-tmsbb\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899283 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899293 4984 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:04 crc kubenswrapper[4984]: I0130 10:34:04.899301 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a005f64f-9ec0-4a4a-b64e-9ae00924dce7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.316506 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-489nm" event={"ID":"a005f64f-9ec0-4a4a-b64e-9ae00924dce7","Type":"ContainerDied","Data":"b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9"}
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.316568 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48be6f7d168157e0f6f1093437d7eac3c3363db05003800520fbe777e5653a9"
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.316598 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-489nm"
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.517197 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.517487 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log" containerID="cri-o://6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f" gracePeriod=30
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.517578 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api" containerID="cri-o://f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978" gracePeriod=30
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.550868 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.551180 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler" containerID="cri-o://4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" gracePeriod=30
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.599022 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.599289 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" containerID="cri-o://07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75" gracePeriod=30
Jan 30 10:34:05 crc kubenswrapper[4984]: I0130 10:34:05.599340 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" containerID="cri-o://c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe" gracePeriod=30
Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.329443 4984 generic.go:334] "Generic (PLEG): container finished" podID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerID="07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75" exitCode=143
Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.329532 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerDied","Data":"07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75"}
Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.332448 4984 generic.go:334] "Generic (PLEG): container finished" podID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerID="6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f" exitCode=143
Jan 30 10:34:06 crc kubenswrapper[4984]: I0130 10:34:06.332495 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerDied","Data":"6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f"}
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.131748 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.247682 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") pod \"50f8d034-e2e3-4db8-85b8-00459162d5ef\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") "
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.247740 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") pod \"50f8d034-e2e3-4db8-85b8-00459162d5ef\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") "
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.247778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") pod \"50f8d034-e2e3-4db8-85b8-00459162d5ef\" (UID: \"50f8d034-e2e3-4db8-85b8-00459162d5ef\") "
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.253698 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9" (OuterVolumeSpecName: "kube-api-access-xkfw9") pod "50f8d034-e2e3-4db8-85b8-00459162d5ef" (UID: "50f8d034-e2e3-4db8-85b8-00459162d5ef"). InnerVolumeSpecName "kube-api-access-xkfw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.281131 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50f8d034-e2e3-4db8-85b8-00459162d5ef" (UID: "50f8d034-e2e3-4db8-85b8-00459162d5ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.283536 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data" (OuterVolumeSpecName: "config-data") pod "50f8d034-e2e3-4db8-85b8-00459162d5ef" (UID: "50f8d034-e2e3-4db8-85b8-00459162d5ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342589 4984 generic.go:334] "Generic (PLEG): container finished" podID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee" exitCode=0
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342636 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerDied","Data":"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"}
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342663 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50f8d034-e2e3-4db8-85b8-00459162d5ef","Type":"ContainerDied","Data":"20034d08b26d3a30783bbce5201d3ea232d38b42631b83ce6cb81264f022a2a3"}
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342681 4984 scope.go:117] "RemoveContainer" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.342812 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.349847 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkfw9\" (UniqueName: \"kubernetes.io/projected/50f8d034-e2e3-4db8-85b8-00459162d5ef-kube-api-access-xkfw9\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.349892 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.349906 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50f8d034-e2e3-4db8-85b8-00459162d5ef-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.369304 4984 scope.go:117] "RemoveContainer" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"
Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.369736 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee\": container with ID starting with 4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee not found: ID does not exist" containerID="4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.369775 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee"} err="failed to get container status \"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee\": rpc error: code = NotFound desc = could not find container \"4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee\": container with ID starting with 4f7539cf39519404eede3ec9b2e49ab2270c4f3e360c765a4f53c0c2504bf4ee not found: ID does not exist"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.379495 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.392753 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.404579 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405011 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405055 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns"
Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405104 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405113 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler"
Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405126 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="init"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405133 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="init"
Jan 30 10:34:07 crc kubenswrapper[4984]: E0130 10:34:07.405146 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerName="nova-manage"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405154 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerName="nova-manage"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405396 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bea2708-4bb8-48d3-ba2a-0b28a921c053" containerName="dnsmasq-dns"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405434 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" containerName="nova-manage"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.405445 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" containerName="nova-scheduler-scheduler"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.406057 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.407806 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.421279 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.452669 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-config-data\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.452734 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.452812 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvxn\" (UniqueName: \"kubernetes.io/projected/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-kube-api-access-7hvxn\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.553980 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.554028 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvxn\" (UniqueName: \"kubernetes.io/projected/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-kube-api-access-7hvxn\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.554178 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-config-data\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.559410 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.569527 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-config-data\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.576619 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvxn\" (UniqueName: \"kubernetes.io/projected/d79d0dc1-f229-4dd7-9d7c-a0e420d6452d-kube-api-access-7hvxn\") pod \"nova-scheduler-0\" (UID: \"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d\") " pod="openstack/nova-scheduler-0"
Jan 30 10:34:07 crc kubenswrapper[4984]: I0130 10:34:07.726751 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 10:34:08 crc kubenswrapper[4984]: I0130 10:34:08.100142 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f8d034-e2e3-4db8-85b8-00459162d5ef" path="/var/lib/kubelet/pods/50f8d034-e2e3-4db8-85b8-00459162d5ef/volumes"
Jan 30 10:34:08 crc kubenswrapper[4984]: I0130 10:34:08.205633 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 10:34:08 crc kubenswrapper[4984]: I0130 10:34:08.355319 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d","Type":"ContainerStarted","Data":"7d749bda7e7a7bca71d05d7f9b3f8db75416cdbea8d46fb7f8b3052ddd84f2a5"}
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.039982 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:52814->10.217.0.195:8775: read: connection reset by peer"
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.039978 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:52826->10.217.0.195:8775: read: connection reset by peer"
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.365352 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d79d0dc1-f229-4dd7-9d7c-a0e420d6452d","Type":"ContainerStarted","Data":"5d25881e88c2b6ad6ee30ea1a57f43e5ca686440ff039679a2280372e21a5f14"}
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.368043 4984 generic.go:334] "Generic (PLEG): container finished" podID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerID="c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe" exitCode=0
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.368082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerDied","Data":"c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe"}
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.385688 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.385631412 podStartE2EDuration="2.385631412s" podCreationTimestamp="2026-01-30 10:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:09.381593973 +0000 UTC m=+1353.947897817" watchObservedRunningTime="2026-01-30 10:34:09.385631412 +0000 UTC m=+1353.951935236"
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.486597 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.592115 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") "
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593055 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") "
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593293 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") "
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593352 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") "
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.593381 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") pod \"a40bafb7-7a35-49bc-aaed-9249967a6da1\" (UID: \"a40bafb7-7a35-49bc-aaed-9249967a6da1\") "
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.594281 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs" (OuterVolumeSpecName: "logs") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.627373 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt" (OuterVolumeSpecName: "kube-api-access-t2xzt") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "kube-api-access-t2xzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.669421 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.679418 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data" (OuterVolumeSpecName: "config-data") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.688412 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a40bafb7-7a35-49bc-aaed-9249967a6da1" (UID: "a40bafb7-7a35-49bc-aaed-9249967a6da1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696437 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2xzt\" (UniqueName: \"kubernetes.io/projected/a40bafb7-7a35-49bc-aaed-9249967a6da1-kube-api-access-t2xzt\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696476 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696487 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696499 4984 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40bafb7-7a35-49bc-aaed-9249967a6da1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:09 crc kubenswrapper[4984]: I0130 10:34:09.696508 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40bafb7-7a35-49bc-aaed-9249967a6da1-logs\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.384469 4984 generic.go:334] "Generic (PLEG): container finished" podID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerID="f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978" exitCode=0
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.384537 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerDied","Data":"f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978"}
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.385116 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485","Type":"ContainerDied","Data":"4c5623a8a80ebe39b23eb64dff70541e70da513adb322fa0a48cea9507d68a53"}
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.385135 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c5623a8a80ebe39b23eb64dff70541e70da513adb322fa0a48cea9507d68a53"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.387462 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a40bafb7-7a35-49bc-aaed-9249967a6da1","Type":"ContainerDied","Data":"390999e98246286ffd4c0bd564da8feefb3e4a9999b850c5b407a8c678c5ab72"}
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.387475 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.387518 4984 scope.go:117] "RemoveContainer" containerID="c8f4476e155cc8c473654cc977453e8e2dcef98753afcbc2bc1176aba4b862fe"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.392438 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.418675 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.436537 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.442555 4984 scope.go:117] "RemoveContainer" containerID="07c50ccbb0c5e45151b2285028ccff4f5761ad52e7820d2fa36ac711c3030e75"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460125 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460520 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460539 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log"
Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460557 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460572 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata"
Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460587 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460595 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api"
Jan 30 10:34:10 crc kubenswrapper[4984]: E0130 10:34:10.460612 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460618 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460775 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-metadata"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460787 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" containerName="nova-metadata-log"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460807 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-log"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.460819 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" containerName="nova-api-api"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.461786 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.465629 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.468919 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.495126 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519169 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") "
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519234 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") "
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519346 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") "
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519383 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") "
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519413 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") "
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.519530 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") pod \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\" (UID: \"d8fcff2a-87bc-4f28-8d8e-f0c694a0a485\") "
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520172 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs" (OuterVolumeSpecName: "logs") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520273 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skc88\" (UniqueName: \"kubernetes.io/projected/0538ab81-6e35-473d-860f-7f680671646d-kube-api-access-skc88\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520345 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520451 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-config-data\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520584 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.520627 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0538ab81-6e35-473d-860f-7f680671646d-logs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0"
Jan 30 10:34:10 crc
kubenswrapper[4984]: I0130 10:34:10.520697 4984 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-logs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.538983 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5" (OuterVolumeSpecName: "kube-api-access-bwsv5") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "kube-api-access-bwsv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.547468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data" (OuterVolumeSpecName: "config-data") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.548321 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.567997 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.592402 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" (UID: "d8fcff2a-87bc-4f28-8d8e-f0c694a0a485"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622651 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622708 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0538ab81-6e35-473d-860f-7f680671646d-logs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622746 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skc88\" (UniqueName: \"kubernetes.io/projected/0538ab81-6e35-473d-860f-7f680671646d-kube-api-access-skc88\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622783 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc 
kubenswrapper[4984]: I0130 10:34:10.622845 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-config-data\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622889 4984 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622902 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622912 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwsv5\" (UniqueName: \"kubernetes.io/projected/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-kube-api-access-bwsv5\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622922 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.622930 4984 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.623568 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0538ab81-6e35-473d-860f-7f680671646d-logs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " 
pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.626510 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-config-data\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.630566 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.632372 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0538ab81-6e35-473d-860f-7f680671646d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.639226 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skc88\" (UniqueName: \"kubernetes.io/projected/0538ab81-6e35-473d-860f-7f680671646d-kube-api-access-skc88\") pod \"nova-metadata-0\" (UID: \"0538ab81-6e35-473d-860f-7f680671646d\") " pod="openstack/nova-metadata-0" Jan 30 10:34:10 crc kubenswrapper[4984]: I0130 10:34:10.788861 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.234818 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: W0130 10:34:11.237923 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0538ab81_6e35_473d_860f_7f680671646d.slice/crio-5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718 WatchSource:0}: Error finding container 5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718: Status 404 returned error can't find the container with id 5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718 Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.398650 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0538ab81-6e35-473d-860f-7f680671646d","Type":"ContainerStarted","Data":"5e9e1ef98d4e412ec4e9e2b85c4abdc1901f2ea8bea18bcd640d7473b36a2718"} Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.398692 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.437407 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.458645 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.478194 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.480147 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.499880 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.500055 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.500265 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.533344 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.542967 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543024 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-public-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543108 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8b1830-c479-4612-a461-7cb46d2c949f-logs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543139 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-config-data\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.543231 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mvd\" (UniqueName: \"kubernetes.io/projected/2a8b1830-c479-4612-a461-7cb46d2c949f-kube-api-access-v8mvd\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.544038 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647067 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647148 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647171 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-public-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " 
pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647217 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8b1830-c479-4612-a461-7cb46d2c949f-logs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-config-data\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.647411 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mvd\" (UniqueName: \"kubernetes.io/projected/2a8b1830-c479-4612-a461-7cb46d2c949f-kube-api-access-v8mvd\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.648571 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8b1830-c479-4612-a461-7cb46d2c949f-logs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.652073 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.652143 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.652091 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-public-tls-certs\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.653120 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8b1830-c479-4612-a461-7cb46d2c949f-config-data\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.662496 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mvd\" (UniqueName: \"kubernetes.io/projected/2a8b1830-c479-4612-a461-7cb46d2c949f-kube-api-access-v8mvd\") pod \"nova-api-0\" (UID: \"2a8b1830-c479-4612-a461-7cb46d2c949f\") " pod="openstack/nova-api-0" Jan 30 10:34:11 crc kubenswrapper[4984]: I0130 10:34:11.904850 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.105708 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40bafb7-7a35-49bc-aaed-9249967a6da1" path="/var/lib/kubelet/pods/a40bafb7-7a35-49bc-aaed-9249967a6da1/volumes" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.106802 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8fcff2a-87bc-4f28-8d8e-f0c694a0a485" path="/var/lib/kubelet/pods/d8fcff2a-87bc-4f28-8d8e-f0c694a0a485/volumes" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.408426 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0538ab81-6e35-473d-860f-7f680671646d","Type":"ContainerStarted","Data":"a8335fd2f11b9193688e84eb0431848813337f8f3c6d75ec631934bf7546301d"} Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.408475 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0538ab81-6e35-473d-860f-7f680671646d","Type":"ContainerStarted","Data":"4f38fabe5cf4a86cbd340031aaf5f118100f8b6a5a9fa7a12e6ed406075899d3"} Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.433171 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.433148023 podStartE2EDuration="2.433148023s" podCreationTimestamp="2026-01-30 10:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:12.424717226 +0000 UTC m=+1356.991021070" watchObservedRunningTime="2026-01-30 10:34:12.433148023 +0000 UTC m=+1356.999451847" Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.445628 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 10:34:12 crc kubenswrapper[4984]: W0130 10:34:12.447151 4984 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a8b1830_c479_4612_a461_7cb46d2c949f.slice/crio-e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515 WatchSource:0}: Error finding container e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515: Status 404 returned error can't find the container with id e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515 Jan 30 10:34:12 crc kubenswrapper[4984]: I0130 10:34:12.727278 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.424401 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a8b1830-c479-4612-a461-7cb46d2c949f","Type":"ContainerStarted","Data":"de62eede36878ed0878f6230a27dfd3a94597ff829a71f7dff2b1d8b31b44d12"} Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.424472 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a8b1830-c479-4612-a461-7cb46d2c949f","Type":"ContainerStarted","Data":"290ee958368f349ab02fd730409b4470f27ac6db778ce926403d11407e9a0846"} Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.424501 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a8b1830-c479-4612-a461-7cb46d2c949f","Type":"ContainerStarted","Data":"e5def199ca03421f7fbff48ff7d51dc3b177e2b4aec5c65e4b7dfed3697bf515"} Jan 30 10:34:13 crc kubenswrapper[4984]: I0130 10:34:13.456535 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.456507873 podStartE2EDuration="2.456507873s" podCreationTimestamp="2026-01-30 10:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:13.45193393 +0000 UTC m=+1358.018237754" watchObservedRunningTime="2026-01-30 10:34:13.456507873 +0000 
UTC m=+1358.022811737" Jan 30 10:34:15 crc kubenswrapper[4984]: I0130 10:34:15.789143 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:34:15 crc kubenswrapper[4984]: I0130 10:34:15.789915 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 10:34:17 crc kubenswrapper[4984]: I0130 10:34:17.727496 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 10:34:17 crc kubenswrapper[4984]: I0130 10:34:17.751508 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 10:34:18 crc kubenswrapper[4984]: I0130 10:34:18.511289 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 10:34:20 crc kubenswrapper[4984]: I0130 10:34:20.789562 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:34:20 crc kubenswrapper[4984]: I0130 10:34:20.789913 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 10:34:21 crc kubenswrapper[4984]: I0130 10:34:21.803528 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0538ab81-6e35-473d-860f-7f680671646d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:21 crc kubenswrapper[4984]: I0130 10:34:21.803543 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0538ab81-6e35-473d-860f-7f680671646d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:21 crc kubenswrapper[4984]: 
I0130 10:34:21.906476 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:21 crc kubenswrapper[4984]: I0130 10:34:21.906791 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 10:34:22 crc kubenswrapper[4984]: I0130 10:34:22.922448 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a8b1830-c479-4612-a461-7cb46d2c949f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:22 crc kubenswrapper[4984]: I0130 10:34:22.922445 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a8b1830-c479-4612-a461-7cb46d2c949f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 10:34:24 crc kubenswrapper[4984]: I0130 10:34:24.546340 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.908435 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.908981 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.923355 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:34:30 crc kubenswrapper[4984]: I0130 10:34:30.923453 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.911532 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.911691 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.912396 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.912429 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.920702 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:34:31 crc kubenswrapper[4984]: I0130 10:34:31.923794 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.001113 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.001407 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.001446 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.002117 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.002167 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8" gracePeriod=600
Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623502 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8" exitCode=0
Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623591 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8"}
Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623889 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"}
Jan 30 10:34:33 crc kubenswrapper[4984]: I0130 10:34:33.623912 4984 scope.go:117] "RemoveContainer" containerID="337ddd5602bd27299b722ba967592fe0a9b4e69cb264da42e77acc2adb5c1796"
Jan 30 10:34:39 crc kubenswrapper[4984]: I0130 10:34:39.529799 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:34:40 crc kubenswrapper[4984]: I0130 10:34:40.371705 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 10:34:44 crc kubenswrapper[4984]: I0130 10:34:44.160176 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq" containerID="cri-o://9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" gracePeriod=604796
Jan 30 10:34:44 crc kubenswrapper[4984]: I0130 10:34:44.485547 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq" containerID="cri-o://53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a" gracePeriod=604796
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.242901 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"]
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.246976 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.255575 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"]
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.335875 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.335962 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.336030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438134 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438201 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438242 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438817 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.438837 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.471071 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"redhat-operators-nc5p7\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") " pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:48 crc kubenswrapper[4984]: I0130 10:34:48.574658 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.064709 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"]
Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.777769 4984 generic.go:334] "Generic (PLEG): container finished" podID="931ec9af-3161-478a-9f45-556b11457731" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292" exitCode=0
Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.777884 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"}
Jan 30 10:34:49 crc kubenswrapper[4984]: I0130 10:34:49.778171 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerStarted","Data":"af28661a5979a3c9094f4b828557ffcee62d82e99faaa0eff4776f3226c108b1"}
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.753623 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787873 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787951 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787972 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.787986 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788019 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788105 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788153 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788194 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788243 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788274 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.788338 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") pod \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\" (UID: \"0e0c1fc2-7876-468d-86b8-7348a8418ee9\") "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.789404 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.799904 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.801539 4984 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.801563 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.808617 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.829993 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.836784 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerStarted","Data":"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"}
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.845782 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.848759 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.850689 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v" (OuterVolumeSpecName: "kube-api-access-mzh7v") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "kube-api-access-mzh7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.854766 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info" (OuterVolumeSpecName: "pod-info") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.865221 4984 generic.go:334] "Generic (PLEG): container finished" podID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerID="53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a" exitCode=0
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.865312 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerDied","Data":"53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a"}
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891841 4984 generic.go:334] "Generic (PLEG): container finished" podID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83" exitCode=0
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891894 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerDied","Data":"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"}
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e0c1fc2-7876-468d-86b8-7348a8418ee9","Type":"ContainerDied","Data":"bcdb7046c71ffa7c47b4451f704154607df933108af67894b8bab478880f2282"}
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.891946 4984 scope.go:117] "RemoveContainer" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.892120 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.907532 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.907561 4984 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0c1fc2-7876-468d-86b8-7348a8418ee9-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909119 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909145 4984 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0c1fc2-7876-468d-86b8-7348a8418ee9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909159 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.909171 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh7v\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-kube-api-access-mzh7v\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.936789 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.939571 4984 scope.go:117] "RemoveContainer" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.950771 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data" (OuterVolumeSpecName: "config-data") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.966524 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf" (OuterVolumeSpecName: "server-conf") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.991372 4984 scope.go:117] "RemoveContainer" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"
Jan 30 10:34:50 crc kubenswrapper[4984]: E0130 10:34:50.991890 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83\": container with ID starting with 9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83 not found: ID does not exist" containerID="9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.991921 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83"} err="failed to get container status \"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83\": rpc error: code = NotFound desc = could not find container \"9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83\": container with ID starting with 9d71225680103d3f047f4a97098085bc644da1baf81d15e0aafbae32602fdd83 not found: ID does not exist"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.991941 4984 scope.go:117] "RemoveContainer" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"
Jan 30 10:34:50 crc kubenswrapper[4984]: E0130 10:34:50.992270 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48\": container with ID starting with f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48 not found: ID does not exist" containerID="f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"
Jan 30 10:34:50 crc kubenswrapper[4984]: I0130 10:34:50.992287 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48"} err="failed to get container status \"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48\": rpc error: code = NotFound desc = could not find container \"f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48\": container with ID starting with f1e31e038106c8ed9aaf4b903d5f930f8c57ae9df060ac40721ba54d45ccfb48 not found: ID does not exist"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.010609 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.010648 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.010659 4984 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0c1fc2-7876-468d-86b8-7348a8418ee9-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.015778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0e0c1fc2-7876-468d-86b8-7348a8418ee9" (UID: "0e0c1fc2-7876-468d-86b8-7348a8418ee9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.112746 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0c1fc2-7876-468d-86b8-7348a8418ee9-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.149211 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214529 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214647 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214778 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214843 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214942 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.214982 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215016 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215041 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215074 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215104 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.215134 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") pod \"6d00f70a-4071-4375-81f3-45e7aab83cd3\" (UID: \"6d00f70a-4071-4375-81f3-45e7aab83cd3\") "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.217658 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.218037 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.219752 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.220112 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.223827 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info" (OuterVolumeSpecName: "pod-info") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.225488 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.254148 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.259470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj" (OuterVolumeSpecName: "kube-api-access-zl7rj") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "kube-api-access-zl7rj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.270155 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.279203 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.308401 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf" (OuterVolumeSpecName: "server-conf") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320670 4984 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d00f70a-4071-4375-81f3-45e7aab83cd3-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320708 4984 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d00f70a-4071-4375-81f3-45e7aab83cd3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320723 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320734 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl7rj\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-kube-api-access-zl7rj\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320761 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320773 4984 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320785 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320796 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.320810 4984 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.328764 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data" (OuterVolumeSpecName: "config-data") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356337 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356846 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="setup-container"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356872 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="setup-container"
Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356893 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356902 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq"
Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356923 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="setup-container"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356931 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="setup-container"
Jan 30 10:34:51 crc kubenswrapper[4984]: E0130 10:34:51.356950 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.356957 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.357187 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" containerName="rabbitmq"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.357214 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" containerName="rabbitmq"
Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.358469 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.367637 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.367790 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.367909 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.373681 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.381655 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.381924 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4bdkz" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.382203 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.383725 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.387658 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.422598 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc 
kubenswrapper[4984]: I0130 10:34:51.422941 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.422987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423063 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423107 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423146 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-config-data\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423216 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423248 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7sxl\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-kube-api-access-k7sxl\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423319 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/137801a7-4625-4c4c-a855-8ecdf65e509a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423341 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/137801a7-4625-4c4c-a855-8ecdf65e509a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423364 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423422 4984 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/6d00f70a-4071-4375-81f3-45e7aab83cd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.423437 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.489486 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6d00f70a-4071-4375-81f3-45e7aab83cd3" (UID: "6d00f70a-4071-4375-81f3-45e7aab83cd3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-config-data\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531242 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531284 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7sxl\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-kube-api-access-k7sxl\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531322 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/137801a7-4625-4c4c-a855-8ecdf65e509a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531340 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/137801a7-4625-4c4c-a855-8ecdf65e509a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531360 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531386 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531401 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531452 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531497 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531524 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.531585 4984 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d00f70a-4071-4375-81f3-45e7aab83cd3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.532654 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.533117 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.533799 4984 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-config-data\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.534574 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/137801a7-4625-4c4c-a855-8ecdf65e509a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.535044 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.543228 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/137801a7-4625-4c4c-a855-8ecdf65e509a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.546751 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.554172 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.556911 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/137801a7-4625-4c4c-a855-8ecdf65e509a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.566122 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.575139 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7sxl\" (UniqueName: \"kubernetes.io/projected/137801a7-4625-4c4c-a855-8ecdf65e509a-kube-api-access-k7sxl\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.583240 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"137801a7-4625-4c4c-a855-8ecdf65e509a\") " pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.642217 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.905054 4984 generic.go:334] "Generic (PLEG): container finished" podID="931ec9af-3161-478a-9f45-556b11457731" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355" exitCode=0 Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.905283 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"} Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.907520 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.907608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6d00f70a-4071-4375-81f3-45e7aab83cd3","Type":"ContainerDied","Data":"9c58a9b1d4f5c119ee458328b6410a44a74ab0304cb65dc2347dcff3a9956c83"} Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.907673 4984 scope.go:117] "RemoveContainer" containerID="53b6a8485be115a64c668b0815e3e9bf5afd9c84f8c35f953989ac9d4c68a89a" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.932759 4984 scope.go:117] "RemoveContainer" containerID="627e3b8cc5def8235dcb65072da12abbb346c0ddb7f3ece2aa1c597e5e7a4e73" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.956802 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.967041 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.982798 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:51 crc kubenswrapper[4984]: 
I0130 10:34:51.984776 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.986678 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.986909 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.987087 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dmx9d" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.987349 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.988169 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.988355 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 10:34:51 crc kubenswrapper[4984]: I0130 10:34:51.988170 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.002965 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.041937 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnrm\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-kube-api-access-4qnrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042007 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92837592-8d1a-4eec-9c06-1d906b4724c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042030 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042098 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042167 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042196 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042220 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042238 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.042280 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92837592-8d1a-4eec-9c06-1d906b4724c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.100348 4984 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="0e0c1fc2-7876-468d-86b8-7348a8418ee9" path="/var/lib/kubelet/pods/0e0c1fc2-7876-468d-86b8-7348a8418ee9/volumes" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.101084 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d00f70a-4071-4375-81f3-45e7aab83cd3" path="/var/lib/kubelet/pods/6d00f70a-4071-4375-81f3-45e7aab83cd3/volumes" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.112357 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 10:34:52 crc kubenswrapper[4984]: W0130 10:34:52.117985 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137801a7_4625_4c4c_a855_8ecdf65e509a.slice/crio-042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896 WatchSource:0}: Error finding container 042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896: Status 404 returned error can't find the container with id 042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896 Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.159080 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163357 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163454 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92837592-8d1a-4eec-9c06-1d906b4724c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163586 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnrm\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-kube-api-access-4qnrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163756 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92837592-8d1a-4eec-9c06-1d906b4724c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.163814 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164197 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164432 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164550 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164593 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.164666 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.165193 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.165229 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.166923 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.167596 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.169093 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.169335 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.170130 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/92837592-8d1a-4eec-9c06-1d906b4724c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc 
kubenswrapper[4984]: I0130 10:34:52.171003 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/92837592-8d1a-4eec-9c06-1d906b4724c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.176390 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/92837592-8d1a-4eec-9c06-1d906b4724c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.183308 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnrm\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-kube-api-access-4qnrm\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.187447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/92837592-8d1a-4eec-9c06-1d906b4724c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.211096 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"92837592-8d1a-4eec-9c06-1d906b4724c2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:52 crc kubenswrapper[4984]: I0130 10:34:52.306636 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.837810 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 10:34:53 crc kubenswrapper[4984]: W0130 10:34:52.845797 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92837592_8d1a_4eec_9c06_1d906b4724c2.slice/crio-ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f WatchSource:0}: Error finding container ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f: Status 404 returned error can't find the container with id ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.924074 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerStarted","Data":"ad1e9d89d2b9df0be98ba01e9a2e3fa694053763b4b79fc5b2cb76a074c5134f"} Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.925019 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerStarted","Data":"042ac03ec8d6fd4af20feb36dfcacdca477a082bb6fdd99c1d9bc3be54bf4896"} Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.965432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerStarted","Data":"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"} Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:52.993841 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nc5p7" podStartSLOduration=2.076203017 podStartE2EDuration="4.993820454s" podCreationTimestamp="2026-01-30 10:34:48 +0000 
UTC" firstStartedPulling="2026-01-30 10:34:49.779382159 +0000 UTC m=+1394.345685983" lastFinishedPulling="2026-01-30 10:34:52.696999596 +0000 UTC m=+1397.263303420" observedRunningTime="2026-01-30 10:34:52.982652676 +0000 UTC m=+1397.548956520" watchObservedRunningTime="2026-01-30 10:34:52.993820454 +0000 UTC m=+1397.560124278" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.459002 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.461042 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.462871 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.471116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495303 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495377 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495403 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495471 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495612 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495694 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.495725 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.597686 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598067 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598111 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598193 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598274 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.598325 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599282 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599315 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599340 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599558 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod 
\"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599840 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.599904 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.618134 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"dnsmasq-dns-d558885bc-x6tx8\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") " pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:53 crc kubenswrapper[4984]: I0130 10:34:53.775722 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:54 crc kubenswrapper[4984]: I0130 10:34:54.013208 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerStarted","Data":"33a1c0ab286308635ce2997101d493bc5bbaf8b33a44aa8dba473c0678633e74"} Jan 30 10:34:54 crc kubenswrapper[4984]: I0130 10:34:54.275291 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"] Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.019768 4984 generic.go:334] "Generic (PLEG): container finished" podID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06" exitCode=0 Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.019815 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerDied","Data":"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"} Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.020067 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerStarted","Data":"0046ce7289b143c1afa94a4ee5518f2e0cbb8f236b2ed7fb1318f74a0dfbd833"} Jan 30 10:34:55 crc kubenswrapper[4984]: I0130 10:34:55.021909 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerStarted","Data":"61487bfc0e3e51e3e465639e925a3638c30e7ede1c3eb153d4f8715997633943"} Jan 30 10:34:56 crc kubenswrapper[4984]: I0130 10:34:56.034545 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" 
event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerStarted","Data":"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"} Jan 30 10:34:56 crc kubenswrapper[4984]: I0130 10:34:56.055177 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" podStartSLOduration=3.055158546 podStartE2EDuration="3.055158546s" podCreationTimestamp="2026-01-30 10:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:34:56.053637696 +0000 UTC m=+1400.619941520" watchObservedRunningTime="2026-01-30 10:34:56.055158546 +0000 UTC m=+1400.621462370" Jan 30 10:34:57 crc kubenswrapper[4984]: I0130 10:34:57.041750 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:34:58 crc kubenswrapper[4984]: I0130 10:34:58.575352 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:58 crc kubenswrapper[4984]: I0130 10:34:58.576799 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nc5p7" Jan 30 10:34:59 crc kubenswrapper[4984]: I0130 10:34:59.628432 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc5p7" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" probeResult="failure" output=< Jan 30 10:34:59 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:34:59 crc kubenswrapper[4984]: > Jan 30 10:35:03 crc kubenswrapper[4984]: I0130 10:35:03.777589 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" Jan 30 10:35:03 crc kubenswrapper[4984]: I0130 10:35:03.874677 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"] Jan 30 10:35:03 crc kubenswrapper[4984]: I0130 10:35:03.875331 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns" containerID="cri-o://e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1" gracePeriod=10 Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.056199 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-fvwt9"] Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.060207 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.124512 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-fvwt9"] Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.140517 4984 generic.go:334] "Generic (PLEG): container finished" podID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerID="e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1" exitCode=0 Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.140565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerDied","Data":"e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1"} Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230465 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2ts\" (UniqueName: \"kubernetes.io/projected/f3033afa-9ac2-4f32-a02d-372dcdbeb984-kube-api-access-tv2ts\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230528 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-config\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230566 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230608 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230640 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.230760 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336244 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv2ts\" (UniqueName: \"kubernetes.io/projected/f3033afa-9ac2-4f32-a02d-372dcdbeb984-kube-api-access-tv2ts\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336763 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-config\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336793 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336843 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336891 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.336976 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.337001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.337942 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-config\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.338301 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.338758 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.341885 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.342773 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.343300 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3033afa-9ac2-4f32-a02d-372dcdbeb984-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.357079 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv2ts\" (UniqueName: \"kubernetes.io/projected/f3033afa-9ac2-4f32-a02d-372dcdbeb984-kube-api-access-tv2ts\") pod \"dnsmasq-dns-78c64bc9c5-fvwt9\" (UID: \"f3033afa-9ac2-4f32-a02d-372dcdbeb984\") " pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.389512 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9"
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.546439 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l"
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642021 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") "
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642100 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") "
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642162 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") "
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642300 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") "
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642358 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") "
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.642387 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") pod \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\" (UID: \"51b210b6-b9ff-41fd-b06b-77aca8956fb6\") "
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.654421 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b" (OuterVolumeSpecName: "kube-api-access-jld6b") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "kube-api-access-jld6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.744774 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jld6b\" (UniqueName: \"kubernetes.io/projected/51b210b6-b9ff-41fd-b06b-77aca8956fb6-kube-api-access-jld6b\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.754850 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config" (OuterVolumeSpecName: "config") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.757024 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.763651 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.765338 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.766046 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51b210b6-b9ff-41fd-b06b-77aca8956fb6" (UID: "51b210b6-b9ff-41fd-b06b-77aca8956fb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846665 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846698 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846711 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846723 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.846734 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51b210b6-b9ff-41fd-b06b-77aca8956fb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:04 crc kubenswrapper[4984]: I0130 10:35:04.866136 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-fvwt9"]
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.154714 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l" event={"ID":"51b210b6-b9ff-41fd-b06b-77aca8956fb6","Type":"ContainerDied","Data":"3a743fd4af77fa8320a0aa82fc1ee65e702a095968f1a2be7dbc346d0b4f3fe2"}
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.155007 4984 scope.go:117] "RemoveContainer" containerID="e0fdb738b1fb1ba9c2379a71a6e54be6dd9797265f710c458c69bafc3eeae7e1"
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.155641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" event={"ID":"f3033afa-9ac2-4f32-a02d-372dcdbeb984","Type":"ContainerStarted","Data":"9ceacd3faf980f439c66b937024148d52844a30641fa1d51ecf875b71c842d50"}
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.156121 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-cct6l"
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.189108 4984 scope.go:117] "RemoveContainer" containerID="6f684411c439001a58a467c45183371d748f6a158f135c5dea4ecaa3e03b6d12"
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.207359 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"]
Jan 30 10:35:05 crc kubenswrapper[4984]: I0130 10:35:05.215143 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-cct6l"]
Jan 30 10:35:06 crc kubenswrapper[4984]: I0130 10:35:06.101717 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" path="/var/lib/kubelet/pods/51b210b6-b9ff-41fd-b06b-77aca8956fb6/volumes"
Jan 30 10:35:06 crc kubenswrapper[4984]: I0130 10:35:06.163773 4984 generic.go:334] "Generic (PLEG): container finished" podID="f3033afa-9ac2-4f32-a02d-372dcdbeb984" containerID="37aa86f993c23a31c3e9a4dd657a8ca12ffec17662ac630ff75cd5c42e30e5c1" exitCode=0
Jan 30 10:35:06 crc kubenswrapper[4984]: I0130 10:35:06.163838 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" event={"ID":"f3033afa-9ac2-4f32-a02d-372dcdbeb984","Type":"ContainerDied","Data":"37aa86f993c23a31c3e9a4dd657a8ca12ffec17662ac630ff75cd5c42e30e5c1"}
Jan 30 10:35:07 crc kubenswrapper[4984]: I0130 10:35:07.177570 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" event={"ID":"f3033afa-9ac2-4f32-a02d-372dcdbeb984","Type":"ContainerStarted","Data":"42d0aecaafdd4bb804a7799dc5cd1267538271cfd08f0710c4b8d707a7fb9848"}
Jan 30 10:35:07 crc kubenswrapper[4984]: I0130 10:35:07.178366 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9"
Jan 30 10:35:07 crc kubenswrapper[4984]: I0130 10:35:07.213861 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9" podStartSLOduration=3.21384384 podStartE2EDuration="3.21384384s" podCreationTimestamp="2026-01-30 10:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:35:07.200774822 +0000 UTC m=+1411.767078656" watchObservedRunningTime="2026-01-30 10:35:07.21384384 +0000 UTC m=+1411.780147654"
Jan 30 10:35:08 crc kubenswrapper[4984]: I0130 10:35:08.631093 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:35:08 crc kubenswrapper[4984]: I0130 10:35:08.691999 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:35:09 crc kubenswrapper[4984]: I0130 10:35:09.432573 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"]
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.201425 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nc5p7" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server" containerID="cri-o://c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" gracePeriod=2
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.641355 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.771349 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") pod \"931ec9af-3161-478a-9f45-556b11457731\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") "
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.771513 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") pod \"931ec9af-3161-478a-9f45-556b11457731\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") "
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.771611 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") pod \"931ec9af-3161-478a-9f45-556b11457731\" (UID: \"931ec9af-3161-478a-9f45-556b11457731\") "
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.772322 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities" (OuterVolumeSpecName: "utilities") pod "931ec9af-3161-478a-9f45-556b11457731" (UID: "931ec9af-3161-478a-9f45-556b11457731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.777172 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v" (OuterVolumeSpecName: "kube-api-access-zfq5v") pod "931ec9af-3161-478a-9f45-556b11457731" (UID: "931ec9af-3161-478a-9f45-556b11457731"). InnerVolumeSpecName "kube-api-access-zfq5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.874757 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfq5v\" (UniqueName: \"kubernetes.io/projected/931ec9af-3161-478a-9f45-556b11457731-kube-api-access-zfq5v\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.874846 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.906460 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "931ec9af-3161-478a-9f45-556b11457731" (UID: "931ec9af-3161-478a-9f45-556b11457731"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:35:10 crc kubenswrapper[4984]: I0130 10:35:10.975790 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931ec9af-3161-478a-9f45-556b11457731-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210403 4984 generic.go:334] "Generic (PLEG): container finished" podID="931ec9af-3161-478a-9f45-556b11457731" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05" exitCode=0
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210487 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc5p7"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210512 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"}
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210846 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc5p7" event={"ID":"931ec9af-3161-478a-9f45-556b11457731","Type":"ContainerDied","Data":"af28661a5979a3c9094f4b828557ffcee62d82e99faaa0eff4776f3226c108b1"}
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.210864 4984 scope.go:117] "RemoveContainer" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.237996 4984 scope.go:117] "RemoveContainer" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.244873 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"]
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.254465 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nc5p7"]
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.280695 4984 scope.go:117] "RemoveContainer" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.299502 4984 scope.go:117] "RemoveContainer" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"
Jan 30 10:35:11 crc kubenswrapper[4984]: E0130 10:35:11.299914 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05\": container with ID starting with c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05 not found: ID does not exist" containerID="c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.299944 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05"} err="failed to get container status \"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05\": rpc error: code = NotFound desc = could not find container \"c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05\": container with ID starting with c9a1da5623bb428033230434221c5a853c89c5e5652287f2bef8b1262dec2c05 not found: ID does not exist"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.299964 4984 scope.go:117] "RemoveContainer" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"
Jan 30 10:35:11 crc kubenswrapper[4984]: E0130 10:35:11.300394 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355\": container with ID starting with 2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355 not found: ID does not exist" containerID="2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.300434 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355"} err="failed to get container status \"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355\": rpc error: code = NotFound desc = could not find container \"2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355\": container with ID starting with 2dbdb48384ac8b12b3c49ec0e14c75208bc496838ee8e8fc80023a5d726dd355 not found: ID does not exist"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.300475 4984 scope.go:117] "RemoveContainer" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"
Jan 30 10:35:11 crc kubenswrapper[4984]: E0130 10:35:11.300812 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292\": container with ID starting with 0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292 not found: ID does not exist" containerID="0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"
Jan 30 10:35:11 crc kubenswrapper[4984]: I0130 10:35:11.300837 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292"} err="failed to get container status \"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292\": rpc error: code = NotFound desc = could not find container \"0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292\": container with ID starting with 0ce7b281cdc0925862fabe5b0be9c12cfd1c272076cf10fdf5ddf4917f2a3292 not found: ID does not exist"
Jan 30 10:35:12 crc kubenswrapper[4984]: I0130 10:35:12.110316 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931ec9af-3161-478a-9f45-556b11457731" path="/var/lib/kubelet/pods/931ec9af-3161-478a-9f45-556b11457731/volumes"
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.391796 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-fvwt9"
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.458654 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"]
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.459274 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns" containerID="cri-o://f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" gracePeriod=10
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.940241 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8"
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.947887 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948331 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948485 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948576 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.948821 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") pod \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\" (UID: \"a2811735-b4c5-4d3a-9b00-4eca7a41aef5\") "
Jan 30 10:35:14 crc kubenswrapper[4984]: I0130 10:35:14.954493 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc" (OuterVolumeSpecName: "kube-api-access-jcvtc") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "kube-api-access-jcvtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.032806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.051693 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.051728 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvtc\" (UniqueName: \"kubernetes.io/projected/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-kube-api-access-jcvtc\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.061740 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.062817 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config" (OuterVolumeSpecName: "config") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.069890 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.073923 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.086181 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2811735-b4c5-4d3a-9b00-4eca7a41aef5" (UID: "a2811735-b4c5-4d3a-9b00-4eca7a41aef5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153581 4984 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153634 4984 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-config\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153646 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153656 4984 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.153668 4984 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2811735-b4c5-4d3a-9b00-4eca7a41aef5-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260722 4984 generic.go:334] "Generic (PLEG): container finished" podID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b" exitCode=0
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260771 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerDied","Data":"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"}
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260787 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-x6tx8"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260807 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-x6tx8" event={"ID":"a2811735-b4c5-4d3a-9b00-4eca7a41aef5","Type":"ContainerDied","Data":"0046ce7289b143c1afa94a4ee5518f2e0cbb8f236b2ed7fb1318f74a0dfbd833"}
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.260826 4984 scope.go:117] "RemoveContainer" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.284266 4984 scope.go:117] "RemoveContainer" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.304707 4984 scope.go:117] "RemoveContainer" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"
Jan 30 10:35:15 crc kubenswrapper[4984]: E0130 10:35:15.305164 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b\": container with ID starting with f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b not found: ID does not exist" containerID="f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.305203 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b"} err="failed to get container status \"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b\": rpc error: code = NotFound desc = could not find container \"f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b\": container with ID starting with f2e819ef9979e94b9f7f62a7b168a18aea87b44ceb9093130f3dcbeddc29ee3b not found: ID does not exist"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.305224 4984 scope.go:117] "RemoveContainer" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"
Jan 30 10:35:15 crc kubenswrapper[4984]: E0130 10:35:15.305638 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06\": container with ID starting with d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06 not found: ID does not exist" containerID="d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.305666 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06"} err="failed to get container status \"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06\": rpc error: code = NotFound desc = could not find container \"d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06\": container with ID starting with d6af2087535c0a28f9ab3439f91d3161d28d4f69a64a7521a29537e65b7cfd06 not found: ID does not exist"
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.314423 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"]
Jan 30 10:35:15 crc kubenswrapper[4984]: I0130 10:35:15.322421 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-x6tx8"]
Jan 30 10:35:16 crc kubenswrapper[4984]: I0130 10:35:16.101534 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" path="/var/lib/kubelet/pods/a2811735-b4c5-4d3a-9b00-4eca7a41aef5/volumes"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.462918 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"]
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.463988 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464005 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns"
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464029 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464037 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns"
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464050 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-utilities"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464058 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-utilities"
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464074 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-content"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464084 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="extract-content"
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464102 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464111 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server"
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464125 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="init"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464133 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="init"
Jan 30 10:35:27 crc kubenswrapper[4984]: E0130 10:35:27.464157 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="init"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464166 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="init"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464401 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2811735-b4c5-4d3a-9b00-4eca7a41aef5" containerName="dnsmasq-dns"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464429 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b210b6-b9ff-41fd-b06b-77aca8956fb6" containerName="dnsmasq-dns"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.464444 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="931ec9af-3161-478a-9f45-556b11457731" containerName="registry-server"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.465163 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.468782 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.468891 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.469818 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.473780 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.489201 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"]
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651453 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651601 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.651981 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.753906 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"
Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.753986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkds\" (UniqueName:
\"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.754040 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.754104 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.762962 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.762996 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.763447 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.773462 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:27 crc kubenswrapper[4984]: I0130 10:35:27.834955 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.013105 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn"] Jan 30 10:35:30 crc kubenswrapper[4984]: W0130 10:35:30.069189 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1985e15d_70be_4079_bd48_55c782dfcba7.slice/crio-86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397 WatchSource:0}: Error finding container 86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397: Status 404 returned error can't find the container with id 86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397 Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.428618 4984 generic.go:334] "Generic (PLEG): container finished" podID="92837592-8d1a-4eec-9c06-1d906b4724c2" containerID="61487bfc0e3e51e3e465639e925a3638c30e7ede1c3eb153d4f8715997633943" exitCode=0 Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.428712 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerDied","Data":"61487bfc0e3e51e3e465639e925a3638c30e7ede1c3eb153d4f8715997633943"} Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.431687 4984 generic.go:334] "Generic (PLEG): container finished" podID="137801a7-4625-4c4c-a855-8ecdf65e509a" containerID="33a1c0ab286308635ce2997101d493bc5bbaf8b33a44aa8dba473c0678633e74" exitCode=0 Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 10:35:30.431759 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerDied","Data":"33a1c0ab286308635ce2997101d493bc5bbaf8b33a44aa8dba473c0678633e74"} Jan 30 10:35:30 crc kubenswrapper[4984]: I0130 
10:35:30.435552 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerStarted","Data":"86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397"} Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.454109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"92837592-8d1a-4eec-9c06-1d906b4724c2","Type":"ContainerStarted","Data":"accaeb30930aec0aaf9006f5c8caa26e36b661b3c76d850da365db7a9c9e871a"} Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.454630 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.457457 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"137801a7-4625-4c4c-a855-8ecdf65e509a","Type":"ContainerStarted","Data":"69628bb286ef87e652ac2a788d656591ea56a3d42903290ccd45df4bcf0ac19b"} Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.457676 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.500782 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.500759112 podStartE2EDuration="40.500759112s" podCreationTimestamp="2026-01-30 10:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:35:31.482561577 +0000 UTC m=+1436.048865431" watchObservedRunningTime="2026-01-30 10:35:31.500759112 +0000 UTC m=+1436.067062956" Jan 30 10:35:31 crc kubenswrapper[4984]: I0130 10:35:31.517823 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=40.517805657 podStartE2EDuration="40.517805657s" podCreationTimestamp="2026-01-30 10:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:35:31.514313014 +0000 UTC m=+1436.080616868" watchObservedRunningTime="2026-01-30 10:35:31.517805657 +0000 UTC m=+1436.084109481" Jan 30 10:35:40 crc kubenswrapper[4984]: I0130 10:35:40.532361 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:35:41 crc kubenswrapper[4984]: I0130 10:35:41.584031 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerStarted","Data":"7684994114ebf07034b9de0091e7ecb9aa3b24abaad446ba483247afd6df4c20"} Jan 30 10:35:41 crc kubenswrapper[4984]: I0130 10:35:41.646717 4984 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="137801a7-4625-4c4c-a855-8ecdf65e509a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.210:5671: connect: connection refused" Jan 30 10:35:42 crc kubenswrapper[4984]: I0130 10:35:42.311485 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 10:35:42 crc kubenswrapper[4984]: I0130 10:35:42.356217 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" podStartSLOduration=4.900454605 podStartE2EDuration="15.356162775s" podCreationTimestamp="2026-01-30 10:35:27 +0000 UTC" firstStartedPulling="2026-01-30 10:35:30.074016649 +0000 UTC m=+1434.640320483" lastFinishedPulling="2026-01-30 10:35:40.529724819 +0000 UTC m=+1445.096028653" observedRunningTime="2026-01-30 10:35:41.600779883 +0000 UTC m=+1446.167083757" watchObservedRunningTime="2026-01-30 
10:35:42.356162775 +0000 UTC m=+1446.922466639" Jan 30 10:35:51 crc kubenswrapper[4984]: I0130 10:35:51.647622 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 10:35:52 crc kubenswrapper[4984]: I0130 10:35:52.728161 4984 generic.go:334] "Generic (PLEG): container finished" podID="1985e15d-70be-4079-bd48-55c782dfcba7" containerID="7684994114ebf07034b9de0091e7ecb9aa3b24abaad446ba483247afd6df4c20" exitCode=0 Jan 30 10:35:52 crc kubenswrapper[4984]: I0130 10:35:52.728219 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerDied","Data":"7684994114ebf07034b9de0091e7ecb9aa3b24abaad446ba483247afd6df4c20"} Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.141755 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.225939 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.226163 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.226311 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.226385 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") pod \"1985e15d-70be-4079-bd48-55c782dfcba7\" (UID: \"1985e15d-70be-4079-bd48-55c782dfcba7\") " Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.231760 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.232649 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds" (OuterVolumeSpecName: "kube-api-access-gqkds") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "kube-api-access-gqkds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.254301 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory" (OuterVolumeSpecName: "inventory") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.259805 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1985e15d-70be-4079-bd48-55c782dfcba7" (UID: "1985e15d-70be-4079-bd48-55c782dfcba7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328855 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkds\" (UniqueName: \"kubernetes.io/projected/1985e15d-70be-4079-bd48-55c782dfcba7-kube-api-access-gqkds\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328899 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328913 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.328924 4984 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1985e15d-70be-4079-bd48-55c782dfcba7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.754666 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" event={"ID":"1985e15d-70be-4079-bd48-55c782dfcba7","Type":"ContainerDied","Data":"86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397"} Jan 30 
10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.754727 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bb2c5da99da9974228d0b976dd63fa6170f6596e1d490f0675842c9b050397" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.754725 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.949927 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4"] Jan 30 10:35:54 crc kubenswrapper[4984]: E0130 10:35:54.953840 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1985e15d-70be-4079-bd48-55c782dfcba7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.953883 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1985e15d-70be-4079-bd48-55c782dfcba7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.957218 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1985e15d-70be-4079-bd48-55c782dfcba7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.959850 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.967520 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.968379 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.968774 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:35:54 crc kubenswrapper[4984]: I0130 10:35:54.969073 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.009689 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4"] Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.043509 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.043671 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.043782 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.145323 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.145612 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.145804 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.152881 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: 
\"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.154020 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.165057 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pgdm4\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.301731 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:35:55 crc kubenswrapper[4984]: I0130 10:35:55.900032 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4"] Jan 30 10:35:56 crc kubenswrapper[4984]: I0130 10:35:56.775926 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerStarted","Data":"43d6f20b7d94869a049bfbbe2b3900cd352cb9cfb4814db14cc3a9ceccbc6f5d"} Jan 30 10:35:56 crc kubenswrapper[4984]: I0130 10:35:56.776565 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerStarted","Data":"ca21821b934d788c6c0b3f7fcbb2572c6288372c82bc0fef0f5bda691acc9c68"} Jan 30 10:35:56 crc kubenswrapper[4984]: I0130 10:35:56.800672 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" podStartSLOduration=2.384750414 podStartE2EDuration="2.80065514s" podCreationTimestamp="2026-01-30 10:35:54 +0000 UTC" firstStartedPulling="2026-01-30 10:35:55.902698124 +0000 UTC m=+1460.469001988" lastFinishedPulling="2026-01-30 10:35:56.31860288 +0000 UTC m=+1460.884906714" observedRunningTime="2026-01-30 10:35:56.793422627 +0000 UTC m=+1461.359726451" watchObservedRunningTime="2026-01-30 10:35:56.80065514 +0000 UTC m=+1461.366958964" Jan 30 10:35:59 crc kubenswrapper[4984]: I0130 10:35:59.819614 4984 generic.go:334] "Generic (PLEG): container finished" podID="049a948c-1945-4217-b728-7f39570dd740" containerID="43d6f20b7d94869a049bfbbe2b3900cd352cb9cfb4814db14cc3a9ceccbc6f5d" exitCode=0 Jan 30 10:35:59 crc kubenswrapper[4984]: I0130 10:35:59.819711 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerDied","Data":"43d6f20b7d94869a049bfbbe2b3900cd352cb9cfb4814db14cc3a9ceccbc6f5d"} Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.253987 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.378371 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") pod \"049a948c-1945-4217-b728-7f39570dd740\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.378698 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") pod \"049a948c-1945-4217-b728-7f39570dd740\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.378759 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") pod \"049a948c-1945-4217-b728-7f39570dd740\" (UID: \"049a948c-1945-4217-b728-7f39570dd740\") " Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.384881 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq" (OuterVolumeSpecName: "kube-api-access-d4jcq") pod "049a948c-1945-4217-b728-7f39570dd740" (UID: "049a948c-1945-4217-b728-7f39570dd740"). InnerVolumeSpecName "kube-api-access-d4jcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.404013 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory" (OuterVolumeSpecName: "inventory") pod "049a948c-1945-4217-b728-7f39570dd740" (UID: "049a948c-1945-4217-b728-7f39570dd740"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.413945 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "049a948c-1945-4217-b728-7f39570dd740" (UID: "049a948c-1945-4217-b728-7f39570dd740"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.480944 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.480999 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/049a948c-1945-4217-b728-7f39570dd740-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.481013 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jcq\" (UniqueName: \"kubernetes.io/projected/049a948c-1945-4217-b728-7f39570dd740-kube-api-access-d4jcq\") on node \"crc\" DevicePath \"\"" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.853965 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" 
event={"ID":"049a948c-1945-4217-b728-7f39570dd740","Type":"ContainerDied","Data":"ca21821b934d788c6c0b3f7fcbb2572c6288372c82bc0fef0f5bda691acc9c68"} Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.854590 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca21821b934d788c6c0b3f7fcbb2572c6288372c82bc0fef0f5bda691acc9c68" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.854032 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pgdm4" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.937879 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"] Jan 30 10:36:01 crc kubenswrapper[4984]: E0130 10:36:01.938479 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049a948c-1945-4217-b728-7f39570dd740" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.938510 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a948c-1945-4217-b728-7f39570dd740" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.938913 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="049a948c-1945-4217-b728-7f39570dd740" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.939984 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943077 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943520 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943776 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.943869 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:36:01 crc kubenswrapper[4984]: I0130 10:36:01.965814 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"] Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.091770 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.091864 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.092021 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.092101 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193651 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193709 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193819 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.193857 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.203025 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.204855 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.226667 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.227152 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.268080 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.821210 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"] Jan 30 10:36:02 crc kubenswrapper[4984]: I0130 10:36:02.867912 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerStarted","Data":"01095e6c934c70dece32c6771158b8523cd2829d8ac02b3a74bf3162a5e9cb66"} Jan 30 10:36:04 crc kubenswrapper[4984]: I0130 10:36:04.892589 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerStarted","Data":"0b66e49635fc5069d1019f6c1c3bef672bc077a1480acff08d68c7fbea15f904"} Jan 30 10:36:04 crc kubenswrapper[4984]: I0130 10:36:04.922326 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" podStartSLOduration=3.17374668 podStartE2EDuration="3.922229729s" podCreationTimestamp="2026-01-30 10:36:01 +0000 UTC" firstStartedPulling="2026-01-30 10:36:02.829625382 +0000 UTC m=+1467.395929226" 
lastFinishedPulling="2026-01-30 10:36:03.578108441 +0000 UTC m=+1468.144412275" observedRunningTime="2026-01-30 10:36:04.910412154 +0000 UTC m=+1469.476715978" watchObservedRunningTime="2026-01-30 10:36:04.922229729 +0000 UTC m=+1469.488533583" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.674087 4984 scope.go:117] "RemoveContainer" containerID="0789f4290dbcaeca5700757294aca052563ba0644765c2738bb82c817de460e2" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.717854 4984 scope.go:117] "RemoveContainer" containerID="f92bcc7f529c6d27eac4218b5f51170e776604565fbe8022a9769f8c3f32b9e1" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.762943 4984 scope.go:117] "RemoveContainer" containerID="ebbbac3df4d2b2a3bcd4123943001f6db476332543301ff3d54dc3650c9da9b0" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.819942 4984 scope.go:117] "RemoveContainer" containerID="f04515d06093bea0006457a33fcd2dff143369d8a73d4cfd520b13fb1b93624f" Jan 30 10:36:20 crc kubenswrapper[4984]: I0130 10:36:20.843538 4984 scope.go:117] "RemoveContainer" containerID="9ee1ed553aa82f1b58b8003898aacd5f65036545ee6400dc5b131af172873423" Jan 30 10:36:33 crc kubenswrapper[4984]: I0130 10:36:33.001404 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:36:33 crc kubenswrapper[4984]: I0130 10:36:33.002460 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:37:03 crc kubenswrapper[4984]: I0130 10:37:03.001001 4984 patch_prober.go:28] interesting 
pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:37:03 crc kubenswrapper[4984]: I0130 10:37:03.001648 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:37:20 crc kubenswrapper[4984]: I0130 10:37:20.979143 4984 scope.go:117] "RemoveContainer" containerID="82eaf4a70aa0bb3862fc793e3843dbf8a715aef600755d897602de67f43a4990" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.021614 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.024646 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.037735 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.050409 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.050463 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.050569 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.152366 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.152556 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.152588 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.153000 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.153140 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.178950 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"redhat-marketplace-skxsz\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.368046 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:23 crc kubenswrapper[4984]: I0130 10:37:23.909628 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:24 crc kubenswrapper[4984]: I0130 10:37:24.812582 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf"} Jan 30 10:37:24 crc kubenswrapper[4984]: I0130 10:37:24.812327 4984 generic.go:334] "Generic (PLEG): container finished" podID="e7389f74-bc2d-4232-921b-527c824b7753" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf" exitCode=0 Jan 30 10:37:24 crc kubenswrapper[4984]: I0130 10:37:24.814418 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerStarted","Data":"cf0415faef1337e90f74baefa9777e03b755fa05e81ba321a56d6e1ded44938a"} Jan 30 10:37:25 crc kubenswrapper[4984]: I0130 10:37:25.826028 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerStarted","Data":"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35"} Jan 30 10:37:26 crc kubenswrapper[4984]: I0130 10:37:26.887870 4984 generic.go:334] "Generic (PLEG): container finished" podID="e7389f74-bc2d-4232-921b-527c824b7753" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" exitCode=0 Jan 30 10:37:26 crc kubenswrapper[4984]: I0130 10:37:26.887957 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" 
event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35"} Jan 30 10:37:27 crc kubenswrapper[4984]: I0130 10:37:27.898598 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerStarted","Data":"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d"} Jan 30 10:37:27 crc kubenswrapper[4984]: I0130 10:37:27.928723 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-skxsz" podStartSLOduration=3.44156766 podStartE2EDuration="5.928703442s" podCreationTimestamp="2026-01-30 10:37:22 +0000 UTC" firstStartedPulling="2026-01-30 10:37:24.815210179 +0000 UTC m=+1549.381514033" lastFinishedPulling="2026-01-30 10:37:27.302345951 +0000 UTC m=+1551.868649815" observedRunningTime="2026-01-30 10:37:27.925813774 +0000 UTC m=+1552.492117598" watchObservedRunningTime="2026-01-30 10:37:27.928703442 +0000 UTC m=+1552.495007266" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.000649 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.001375 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.001430 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.003020 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.003101 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" gracePeriod=600 Jan 30 10:37:33 crc kubenswrapper[4984]: E0130 10:37:33.144234 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.368766 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.369002 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.443091 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:33 crc kubenswrapper[4984]: 
I0130 10:37:33.967386 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" exitCode=0 Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.967458 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"} Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.967526 4984 scope.go:117] "RemoveContainer" containerID="43453d0c25d6e9a5481a338fdd36fdf08a13276f81a1062cc1900dca47fa17b8" Jan 30 10:37:33 crc kubenswrapper[4984]: I0130 10:37:33.968520 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:37:33 crc kubenswrapper[4984]: E0130 10:37:33.968930 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:37:34 crc kubenswrapper[4984]: I0130 10:37:34.055521 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:34 crc kubenswrapper[4984]: I0130 10:37:34.106227 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.007051 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-skxsz" podUID="e7389f74-bc2d-4232-921b-527c824b7753" 
containerName="registry-server" containerID="cri-o://f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" gracePeriod=2 Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.518773 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.689935 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") pod \"e7389f74-bc2d-4232-921b-527c824b7753\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.690107 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") pod \"e7389f74-bc2d-4232-921b-527c824b7753\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.690183 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") pod \"e7389f74-bc2d-4232-921b-527c824b7753\" (UID: \"e7389f74-bc2d-4232-921b-527c824b7753\") " Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.691921 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities" (OuterVolumeSpecName: "utilities") pod "e7389f74-bc2d-4232-921b-527c824b7753" (UID: "e7389f74-bc2d-4232-921b-527c824b7753"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.697824 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5" (OuterVolumeSpecName: "kube-api-access-qnsd5") pod "e7389f74-bc2d-4232-921b-527c824b7753" (UID: "e7389f74-bc2d-4232-921b-527c824b7753"). InnerVolumeSpecName "kube-api-access-qnsd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.715831 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7389f74-bc2d-4232-921b-527c824b7753" (UID: "e7389f74-bc2d-4232-921b-527c824b7753"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.792622 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.792667 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7389f74-bc2d-4232-921b-527c824b7753-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:37:36 crc kubenswrapper[4984]: I0130 10:37:36.792677 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnsd5\" (UniqueName: \"kubernetes.io/projected/e7389f74-bc2d-4232-921b-527c824b7753-kube-api-access-qnsd5\") on node \"crc\" DevicePath \"\"" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016067 4984 generic.go:334] "Generic (PLEG): container finished" podID="e7389f74-bc2d-4232-921b-527c824b7753" 
containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" exitCode=0 Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016124 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d"} Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016455 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-skxsz" event={"ID":"e7389f74-bc2d-4232-921b-527c824b7753","Type":"ContainerDied","Data":"cf0415faef1337e90f74baefa9777e03b755fa05e81ba321a56d6e1ded44938a"} Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016134 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-skxsz" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.016475 4984 scope.go:117] "RemoveContainer" containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.035216 4984 scope.go:117] "RemoveContainer" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.056964 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.059974 4984 scope.go:117] "RemoveContainer" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.067312 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-skxsz"] Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.128165 4984 scope.go:117] "RemoveContainer" containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" Jan 30 
10:37:37 crc kubenswrapper[4984]: E0130 10:37:37.128673 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d\": container with ID starting with f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d not found: ID does not exist" containerID="f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.128713 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d"} err="failed to get container status \"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d\": rpc error: code = NotFound desc = could not find container \"f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d\": container with ID starting with f7d9ed987084909362bfe2b1730788e785c5c6a8282d09b47d698923516f6f2d not found: ID does not exist" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.128767 4984 scope.go:117] "RemoveContainer" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" Jan 30 10:37:37 crc kubenswrapper[4984]: E0130 10:37:37.129159 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35\": container with ID starting with 94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35 not found: ID does not exist" containerID="94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35" Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.129200 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35"} err="failed to get container status 
\"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35\": rpc error: code = NotFound desc = could not find container \"94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35\": container with ID starting with 94f7d98886b19953107530946aba5e10a97e54f3afe54362f772a60b95f92f35 not found: ID does not exist"
Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.129227 4984 scope.go:117] "RemoveContainer" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf"
Jan 30 10:37:37 crc kubenswrapper[4984]: E0130 10:37:37.129608 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf\": container with ID starting with fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf not found: ID does not exist" containerID="fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf"
Jan 30 10:37:37 crc kubenswrapper[4984]: I0130 10:37:37.129666 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf"} err="failed to get container status \"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf\": rpc error: code = NotFound desc = could not find container \"fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf\": container with ID starting with fa4aa9bb97a2f8e70b5b3a7b10bcd49403ca756a90ddaed14904ca78ad4c61cf not found: ID does not exist"
Jan 30 10:37:38 crc kubenswrapper[4984]: I0130 10:37:38.108823 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7389f74-bc2d-4232-921b-527c824b7753" path="/var/lib/kubelet/pods/e7389f74-bc2d-4232-921b-527c824b7753/volumes"
Jan 30 10:37:48 crc kubenswrapper[4984]: I0130 10:37:48.091124 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:37:48 crc kubenswrapper[4984]: E0130 10:37:48.092030 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:37:59 crc kubenswrapper[4984]: I0130 10:37:59.090455 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:37:59 crc kubenswrapper[4984]: E0130 10:37:59.091385 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:38:11 crc kubenswrapper[4984]: I0130 10:38:11.091066 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:38:11 crc kubenswrapper[4984]: E0130 10:38:11.092804 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.069651 4984 scope.go:117] "RemoveContainer" containerID="96b3846288cceafda1ee7274b76693df892ab4500d4aff35fab684512216cc00"
Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.113591 4984 scope.go:117] "RemoveContainer" containerID="3613011bb5abdf53d835f3cef6db40eb5860197a58fe6745090b8ffebbf09eca"
Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.150402 4984 scope.go:117] "RemoveContainer" containerID="9d35c38a5551baaf7ed4a5b4d69f59f5843939592b70c610ddbe87a91a00af4b"
Jan 30 10:38:21 crc kubenswrapper[4984]: I0130 10:38:21.180373 4984 scope.go:117] "RemoveContainer" containerID="3aad19b6125845667d072cf0f08ee46a226fcd6fead4729460c4f88d31231631"
Jan 30 10:38:24 crc kubenswrapper[4984]: I0130 10:38:24.089941 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:38:24 crc kubenswrapper[4984]: E0130 10:38:24.090211 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:38:35 crc kubenswrapper[4984]: I0130 10:38:35.090393 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:38:35 crc kubenswrapper[4984]: E0130 10:38:35.091415 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:38:50 crc kubenswrapper[4984]: I0130 10:38:50.090557 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:38:50 crc kubenswrapper[4984]: E0130 10:38:50.091590 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:39:03 crc kubenswrapper[4984]: I0130 10:39:03.090695 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:39:03 crc kubenswrapper[4984]: E0130 10:39:03.091520 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:39:07 crc kubenswrapper[4984]: I0130 10:39:07.979180 4984 generic.go:334] "Generic (PLEG): container finished" podID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerID="0b66e49635fc5069d1019f6c1c3bef672bc077a1480acff08d68c7fbea15f904" exitCode=0
Jan 30 10:39:07 crc kubenswrapper[4984]: I0130 10:39:07.979290 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerDied","Data":"0b66e49635fc5069d1019f6c1c3bef672bc077a1480acff08d68c7fbea15f904"}
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.496294 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.627773 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") "
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.627888 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") "
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.628026 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") "
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.628148 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") pod \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\" (UID: \"ba20d4a0-7acc-4813-8fa9-6f166802bd04\") "
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.633555 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.634033 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2" (OuterVolumeSpecName: "kube-api-access-2xfl2") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "kube-api-access-2xfl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.659592 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory" (OuterVolumeSpecName: "inventory") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.660268 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba20d4a0-7acc-4813-8fa9-6f166802bd04" (UID: "ba20d4a0-7acc-4813-8fa9-6f166802bd04"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730678 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730720 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xfl2\" (UniqueName: \"kubernetes.io/projected/ba20d4a0-7acc-4813-8fa9-6f166802bd04-kube-api-access-2xfl2\") on node \"crc\" DevicePath \"\""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730731 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.730741 4984 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba20d4a0-7acc-4813-8fa9-6f166802bd04-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.999290 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd" event={"ID":"ba20d4a0-7acc-4813-8fa9-6f166802bd04","Type":"ContainerDied","Data":"01095e6c934c70dece32c6771158b8523cd2829d8ac02b3a74bf3162a5e9cb66"}
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.999324 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd"
Jan 30 10:39:09 crc kubenswrapper[4984]: I0130 10:39:09.999336 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01095e6c934c70dece32c6771158b8523cd2829d8ac02b3a74bf3162a5e9cb66"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.106898 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"]
Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107407 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-utilities"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107433 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-utilities"
Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107452 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107465 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107495 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-content"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107504 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="extract-content"
Jan 30 10:39:10 crc kubenswrapper[4984]: E0130 10:39:10.107528 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="registry-server"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107535 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="registry-server"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107757 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba20d4a0-7acc-4813-8fa9-6f166802bd04" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.107783 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7389f74-bc2d-4232-921b-527c824b7753" containerName="registry-server"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.108559 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.111363 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.111465 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.111718 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.112166 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.116505 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"]
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.242859 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.242942 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.244141 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.346503 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.346655 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.347450 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.354018 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.357204 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.367609 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zfts7\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:10 crc kubenswrapper[4984]: I0130 10:39:10.431072 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"
Jan 30 10:39:11 crc kubenswrapper[4984]: I0130 10:39:11.035584 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7"]
Jan 30 10:39:11 crc kubenswrapper[4984]: W0130 10:39:11.040430 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8414dabf_1fa1_4a4c_8db5_55ef7397164d.slice/crio-fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62 WatchSource:0}: Error finding container fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62: Status 404 returned error can't find the container with id fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62
Jan 30 10:39:11 crc kubenswrapper[4984]: I0130 10:39:11.044964 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 10:39:12 crc kubenswrapper[4984]: I0130 10:39:12.017886 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerStarted","Data":"82ecbb803ee39720407c4c93639d911fbdd5cca24aaa0b709401e13cd1f3ac74"}
Jan 30 10:39:12 crc kubenswrapper[4984]: I0130 10:39:12.018412 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerStarted","Data":"fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62"}
Jan 30 10:39:12 crc kubenswrapper[4984]: I0130 10:39:12.039974 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" podStartSLOduration=1.663479111 podStartE2EDuration="2.039951623s" podCreationTimestamp="2026-01-30 10:39:10 +0000 UTC" firstStartedPulling="2026-01-30 10:39:11.04467193 +0000 UTC m=+1655.610975764" lastFinishedPulling="2026-01-30 10:39:11.421144442 +0000 UTC m=+1655.987448276" observedRunningTime="2026-01-30 10:39:12.0335786 +0000 UTC m=+1656.599882424" watchObservedRunningTime="2026-01-30 10:39:12.039951623 +0000 UTC m=+1656.606255447"
Jan 30 10:39:14 crc kubenswrapper[4984]: I0130 10:39:14.090905 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:39:14 crc kubenswrapper[4984]: E0130 10:39:14.091436 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.267896 4984 scope.go:117] "RemoveContainer" containerID="5cf3e1bb50c8c1bf2e5081a334fdeec215cc743a67e0e0099a51529974de06f6"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.309069 4984 scope.go:117] "RemoveContainer" containerID="6516cd82f504071d734a568a8ad9702281f933505556b87c719fec533654c9eb"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.346890 4984 scope.go:117] "RemoveContainer" containerID="295d59b3447932ddb067dcb31614a2834d9d8b9cdb5ddc06d017993af6e8fff0"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.380142 4984 scope.go:117] "RemoveContainer" containerID="26cedc8d39143068c7af7fbe31b7529182b7c3711cab842800ebfe172989d20f"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.411173 4984 scope.go:117] "RemoveContainer" containerID="e0480b6eea3ae9535888a948520356ded5fd055e3feb9c1b4037e862f8a5db4f"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.438421 4984 scope.go:117] "RemoveContainer" containerID="d4bc2a2f7b160f6def950bc8159a520d7a0931eff8224309d407450944a4f179"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.455797 4984 scope.go:117] "RemoveContainer" containerID="3761fc33c106fcf19391d308eeaebf6562714966cd527dac72e6d2ff4f5555af"
Jan 30 10:39:21 crc kubenswrapper[4984]: I0130 10:39:21.479896 4984 scope.go:117] "RemoveContainer" containerID="33c4d531ffa35085fcab78b3f5565006c2a350b3d8544612587406b39f3ec0ce"
Jan 30 10:39:29 crc kubenswrapper[4984]: I0130 10:39:29.091532 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:39:29 crc kubenswrapper[4984]: E0130 10:39:29.092904 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.054919 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"]
Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.066236 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jjssn"]
Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.078760 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jjssn"]
Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.089176 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6746-account-create-update-clg9v"]
Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.102995 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd118357-c4bf-43ef-a738-9fcd6b07aac4" path="/var/lib/kubelet/pods/dd118357-c4bf-43ef-a738-9fcd6b07aac4/volumes"
Jan 30 10:39:30 crc kubenswrapper[4984]: I0130 10:39:30.103840 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83eb734-fae0-40ac-85db-8f8c8fb26133" path="/var/lib/kubelet/pods/e83eb734-fae0-40ac-85db-8f8c8fb26133/volumes"
Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.033649 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"]
Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.044369 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4q2ws"]
Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.053134 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4q2ws"]
Jan 30 10:39:31 crc kubenswrapper[4984]: I0130 10:39:31.060554 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f26c-account-create-update-7p7pm"]
Jan 30 10:39:32 crc kubenswrapper[4984]: I0130 10:39:32.102051 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd7bd77-9e19-4ad1-9711-e0290f74afa8" path="/var/lib/kubelet/pods/0dd7bd77-9e19-4ad1-9711-e0290f74afa8/volumes"
Jan 30 10:39:32 crc kubenswrapper[4984]: I0130 10:39:32.102673 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c89dde7-c492-44dd-b36c-571540039b30" path="/var/lib/kubelet/pods/4c89dde7-c492-44dd-b36c-571540039b30/volumes"
Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.042902 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"]
Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.060529 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mwcqt"]
Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.069019 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mwcqt"]
Jan 30 10:39:37 crc kubenswrapper[4984]: I0130 10:39:37.076053 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-064a-account-create-update-8lxkv"]
Jan 30 10:39:38 crc kubenswrapper[4984]: I0130 10:39:38.108168 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c0dd46-b897-468f-87a0-a335dd8fd6d5" path="/var/lib/kubelet/pods/83c0dd46-b897-468f-87a0-a335dd8fd6d5/volumes"
Jan 30 10:39:38 crc kubenswrapper[4984]: I0130 10:39:38.109952 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849571b4-26bb-4853-af9c-f717967dea41" path="/var/lib/kubelet/pods/849571b4-26bb-4853-af9c-f717967dea41/volumes"
Jan 30 10:39:40 crc kubenswrapper[4984]: I0130 10:39:40.090356 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:39:40 crc kubenswrapper[4984]: E0130 10:39:40.090747 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:39:54 crc kubenswrapper[4984]: I0130 10:39:54.090308 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:39:54 crc kubenswrapper[4984]: E0130 10:39:54.091112 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.043761 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.053330 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-p7n6d"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.063842 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.072303 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-p7n6d"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.080918 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-622f-account-create-update-xxrl4"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.089220 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bhbll"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.103666 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6c0cd3-99cd-454e-8ceb-000141c59c2b" path="/var/lib/kubelet/pods/1c6c0cd3-99cd-454e-8ceb-000141c59c2b/volumes"
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.104891 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d291ef2c-2cdb-47be-b508-efd4c8282791" path="/var/lib/kubelet/pods/d291ef2c-2cdb-47be-b508-efd4c8282791/volumes"
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.105727 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8d9e-account-create-update-pv4gq"]
Jan 30 10:39:58 crc kubenswrapper[4984]: I0130 10:39:58.107567 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bhbll"]
Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.030744 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qtwt7"]
Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.041295 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"]
Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.051936 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-61ae-account-create-update-8l5nb"]
Jan 30 10:39:59 crc kubenswrapper[4984]: I0130 10:39:59.062752 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-qtwt7"]
Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.105665 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26267a37-c8e7-45b3-af7f-8050a58cb697" path="/var/lib/kubelet/pods/26267a37-c8e7-45b3-af7f-8050a58cb697/volumes"
Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.107966 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341b21ee-dc5c-48f9-9810-85d1af9b9de9" path="/var/lib/kubelet/pods/341b21ee-dc5c-48f9-9810-85d1af9b9de9/volumes"
Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.109241 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f293b1-64af-45c3-8ee1-b8df7efdde3e" path="/var/lib/kubelet/pods/c4f293b1-64af-45c3-8ee1-b8df7efdde3e/volumes"
Jan 30 10:40:00 crc kubenswrapper[4984]: I0130 10:40:00.110497 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5490b62-8700-4c9c-b4f7-517c71f91c46" path="/var/lib/kubelet/pods/f5490b62-8700-4c9c-b4f7-517c71f91c46/volumes"
Jan 30 10:40:02 crc kubenswrapper[4984]: I0130 10:40:02.045558 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-v95fj"]
Jan 30 10:40:02 crc kubenswrapper[4984]: I0130 10:40:02.054315 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-v95fj"]
Jan 30 10:40:02 crc kubenswrapper[4984]: I0130 10:40:02.100091 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfce8525-20d3-4c57-9638-37a46571c375" path="/var/lib/kubelet/pods/bfce8525-20d3-4c57-9638-37a46571c375/volumes"
Jan 30 10:40:06 crc kubenswrapper[4984]: I0130 10:40:06.096688 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:40:06 crc kubenswrapper[4984]: E0130 10:40:06.097632 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:40:18 crc kubenswrapper[4984]: I0130 10:40:18.090866 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:40:18 crc kubenswrapper[4984]: E0130 10:40:18.092300 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.554666 4984 scope.go:117] "RemoveContainer" containerID="990b9baffd84708013a7a3ee4aa2247425d308cfa8107b4fdee81cf4fe0b11dc"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.596550 4984 scope.go:117] "RemoveContainer" containerID="6b1aaabba9cfa8c1f6dfd85c93ed3b8a280e0a8ae2a73f1049cc58417939709f"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.674856 4984 scope.go:117] "RemoveContainer" containerID="2c715bd7c478626b0d30f0dcbe5f0fa4d9ddd3cebe540358d60fefd03ffbea4f"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.702538 4984 scope.go:117] "RemoveContainer" containerID="498b8053d9b5e3a916ccebe1006886752c4f6d7609924037fb495a51da3787d5"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.746078 4984 scope.go:117] "RemoveContainer" containerID="b2c5eedb1976c1f88ba872ebef95c16d2cb8d47db5e197de1d5f09d25aea4f90"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.796641 4984 scope.go:117] "RemoveContainer" containerID="b43c0631539e8d8618d4ae2280e84e9cef0ad9ab61a9b8d7dfd994b58ac2994b"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.841719 4984 scope.go:117] "RemoveContainer" containerID="f3470565542ea1c66598b3a5981194b216557da34b0dbe74cf09a86a91c5f978"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.887022 4984 scope.go:117] "RemoveContainer" containerID="f9f5f71df6bcff6e848630eab001a1a161d02735319888972af7604f9aa242ac"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.917220 4984 scope.go:117] "RemoveContainer" containerID="65086de31b1aa439689527681ff638af7559dadfbbbe7fd2e976641d2933b6ce"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.945238 4984 scope.go:117] "RemoveContainer" containerID="70e9112a74a7aadc96357a6c30b6f274f66b33e88559a27a17cb48d3251c7fbb"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.973120 4984 scope.go:117] "RemoveContainer" containerID="e4a188d3d377fd9a910224b46c8bfca036c469e31163b866035741aa0bc79a21"
Jan 30 10:40:21 crc kubenswrapper[4984]: I0130 10:40:21.991637 4984 scope.go:117] "RemoveContainer" containerID="636c0d411532393965dbc0c85c0755158f7ef4a0555bad562fe1e96ce9c7b1be"
Jan 30 10:40:22 crc kubenswrapper[4984]: I0130 10:40:22.014824 4984 scope.go:117] "RemoveContainer" containerID="4e36e53c2881a6f73654429fc80824078411a297a7acc1ff57eb163eb773e0f9"
Jan 30 10:40:22 crc kubenswrapper[4984]: I0130 10:40:22.041806 4984 scope.go:117] "RemoveContainer"
containerID="73182d3db897a608122b23320455311eced5f1e7bb5cd0d6aaf0f4d8d9abd5cb" Jan 30 10:40:22 crc kubenswrapper[4984]: I0130 10:40:22.066795 4984 scope.go:117] "RemoveContainer" containerID="3be32fd131009048bc81a0d4461ef13892f209f53fa5bcf3e5c232baa45cfcc2" Jan 30 10:40:30 crc kubenswrapper[4984]: I0130 10:40:30.054650 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:40:30 crc kubenswrapper[4984]: I0130 10:40:30.068893 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l7nmz"] Jan 30 10:40:30 crc kubenswrapper[4984]: I0130 10:40:30.108184 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9a5e83-0bd4-4550-a3c9-e297cc831e99" path="/var/lib/kubelet/pods/ca9a5e83-0bd4-4550-a3c9-e297cc831e99/volumes" Jan 30 10:40:32 crc kubenswrapper[4984]: I0130 10:40:32.089862 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:40:32 crc kubenswrapper[4984]: E0130 10:40:32.090398 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:40:36 crc kubenswrapper[4984]: I0130 10:40:36.028641 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:40:36 crc kubenswrapper[4984]: I0130 10:40:36.036821 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-whl8p"] Jan 30 10:40:36 crc kubenswrapper[4984]: I0130 10:40:36.111079 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="58c1d730-34f1-4912-a0e9-f19d10e9ec9b" path="/var/lib/kubelet/pods/58c1d730-34f1-4912-a0e9-f19d10e9ec9b/volumes" Jan 30 10:40:45 crc kubenswrapper[4984]: I0130 10:40:45.090632 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:40:45 crc kubenswrapper[4984]: E0130 10:40:45.091420 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:00 crc kubenswrapper[4984]: I0130 10:41:00.090307 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:00 crc kubenswrapper[4984]: E0130 10:41:00.091107 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:12 crc kubenswrapper[4984]: I0130 10:41:12.090695 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:12 crc kubenswrapper[4984]: E0130 10:41:12.091687 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:12 crc kubenswrapper[4984]: I0130 10:41:12.275956 4984 generic.go:334] "Generic (PLEG): container finished" podID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerID="82ecbb803ee39720407c4c93639d911fbdd5cca24aaa0b709401e13cd1f3ac74" exitCode=0 Jan 30 10:41:12 crc kubenswrapper[4984]: I0130 10:41:12.276004 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerDied","Data":"82ecbb803ee39720407c4c93639d911fbdd5cca24aaa0b709401e13cd1f3ac74"} Jan 30 10:41:13 crc kubenswrapper[4984]: I0130 10:41:13.808445 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.002572 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") pod \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.002682 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") pod \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.002724 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") pod \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\" (UID: \"8414dabf-1fa1-4a4c-8db5-55ef7397164d\") " Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.011059 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx" (OuterVolumeSpecName: "kube-api-access-46bkx") pod "8414dabf-1fa1-4a4c-8db5-55ef7397164d" (UID: "8414dabf-1fa1-4a4c-8db5-55ef7397164d"). InnerVolumeSpecName "kube-api-access-46bkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.031744 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8414dabf-1fa1-4a4c-8db5-55ef7397164d" (UID: "8414dabf-1fa1-4a4c-8db5-55ef7397164d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.035308 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory" (OuterVolumeSpecName: "inventory") pod "8414dabf-1fa1-4a4c-8db5-55ef7397164d" (UID: "8414dabf-1fa1-4a4c-8db5-55ef7397164d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.107080 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46bkx\" (UniqueName: \"kubernetes.io/projected/8414dabf-1fa1-4a4c-8db5-55ef7397164d-kube-api-access-46bkx\") on node \"crc\" DevicePath \"\"" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.107126 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.107138 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8414dabf-1fa1-4a4c-8db5-55ef7397164d-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.309710 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" event={"ID":"8414dabf-1fa1-4a4c-8db5-55ef7397164d","Type":"ContainerDied","Data":"fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62"} Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.310116 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fccb175b4dcf49aa7d8e3e47ded95e7de1c4ed02472603728ee06938689c1a62" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.309810 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zfts7" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.433626 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg"] Jan 30 10:41:14 crc kubenswrapper[4984]: E0130 10:41:14.434223 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.434280 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.434618 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="8414dabf-1fa1-4a4c-8db5-55ef7397164d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.435637 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.439889 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.440434 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.440660 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.440911 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.453615 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg"] Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.620352 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.620987 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 
10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.621222 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.723499 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.723586 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.723622 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.730067 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.730180 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.745627 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:14 crc kubenswrapper[4984]: I0130 10:41:14.768961 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:41:15 crc kubenswrapper[4984]: I0130 10:41:15.356496 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg"] Jan 30 10:41:15 crc kubenswrapper[4984]: W0130 10:41:15.362083 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded90c997_eddb_4afb_ae0d_31dd3ef4c485.slice/crio-e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557 WatchSource:0}: Error finding container e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557: Status 404 returned error can't find the container with id e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557 Jan 30 10:41:16 crc kubenswrapper[4984]: I0130 10:41:16.329040 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerStarted","Data":"0a9f391398259516c72ece0ca377a1d28d2d067e8bd53fb4fc4fa3f92e8b395d"} Jan 30 10:41:16 crc kubenswrapper[4984]: I0130 10:41:16.329492 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerStarted","Data":"e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557"} Jan 30 10:41:16 crc kubenswrapper[4984]: I0130 10:41:16.348359 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" podStartSLOduration=1.751730427 podStartE2EDuration="2.348331376s" podCreationTimestamp="2026-01-30 10:41:14 +0000 UTC" firstStartedPulling="2026-01-30 10:41:15.365926813 +0000 UTC m=+1779.932230637" lastFinishedPulling="2026-01-30 10:41:15.962527742 +0000 
UTC m=+1780.528831586" observedRunningTime="2026-01-30 10:41:16.342965211 +0000 UTC m=+1780.909269065" watchObservedRunningTime="2026-01-30 10:41:16.348331376 +0000 UTC m=+1780.914635240" Jan 30 10:41:22 crc kubenswrapper[4984]: I0130 10:41:22.307427 4984 scope.go:117] "RemoveContainer" containerID="0e27973ea9b1e09e6fd759eac37e1b5558d22ece2091da32401b555f34855ccf" Jan 30 10:41:22 crc kubenswrapper[4984]: I0130 10:41:22.359876 4984 scope.go:117] "RemoveContainer" containerID="429ee7ba89918111f347a6702ac0f612b104b982e26e880afe61da5e67302534" Jan 30 10:41:23 crc kubenswrapper[4984]: I0130 10:41:23.090897 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:23 crc kubenswrapper[4984]: E0130 10:41:23.091881 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:29 crc kubenswrapper[4984]: I0130 10:41:29.057156 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:41:29 crc kubenswrapper[4984]: I0130 10:41:29.072313 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qb89x"] Jan 30 10:41:30 crc kubenswrapper[4984]: I0130 10:41:30.110242 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ce38a2-070f-4aac-9495-d27d915c5ae1" path="/var/lib/kubelet/pods/e6ce38a2-070f-4aac-9495-d27d915c5ae1/volumes" Jan 30 10:41:34 crc kubenswrapper[4984]: I0130 10:41:34.090144 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:34 crc 
kubenswrapper[4984]: E0130 10:41:34.090769 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:47 crc kubenswrapper[4984]: I0130 10:41:47.090133 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:41:47 crc kubenswrapper[4984]: E0130 10:41:47.091442 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:41:56 crc kubenswrapper[4984]: I0130 10:41:56.061485 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:41:56 crc kubenswrapper[4984]: I0130 10:41:56.073287 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bfzdw"] Jan 30 10:41:56 crc kubenswrapper[4984]: I0130 10:41:56.108667 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3048d738-67a2-417f-91ca-8993f4b557f1" path="/var/lib/kubelet/pods/3048d738-67a2-417f-91ca-8993f4b557f1/volumes" Jan 30 10:41:57 crc kubenswrapper[4984]: I0130 10:41:57.036120 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:41:57 crc kubenswrapper[4984]: I0130 10:41:57.046054 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-pxnz6"] Jan 30 10:41:58 crc kubenswrapper[4984]: I0130 10:41:58.109695 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1" path="/var/lib/kubelet/pods/84a8a7c2-ba1c-4c81-af6a-89fa4ea02ae1/volumes" Jan 30 10:41:59 crc kubenswrapper[4984]: I0130 10:41:59.030419 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:41:59 crc kubenswrapper[4984]: I0130 10:41:59.039992 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4q4x7"] Jan 30 10:42:00 crc kubenswrapper[4984]: I0130 10:42:00.121173 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80" path="/var/lib/kubelet/pods/67ed5fb4-b4a1-48b0-a74b-f6a03f7a0b80/volumes" Jan 30 10:42:01 crc kubenswrapper[4984]: I0130 10:42:01.091043 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:01 crc kubenswrapper[4984]: E0130 10:42:01.091728 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:42:09 crc kubenswrapper[4984]: I0130 10:42:09.056097 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:42:09 crc kubenswrapper[4984]: I0130 10:42:09.072735 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5hx59"] Jan 30 10:42:10 crc kubenswrapper[4984]: I0130 10:42:10.102464 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2405c6ec-2510-4786-a602-ae85d358ed1f" path="/var/lib/kubelet/pods/2405c6ec-2510-4786-a602-ae85d358ed1f/volumes" Jan 30 10:42:16 crc kubenswrapper[4984]: I0130 10:42:16.097175 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:16 crc kubenswrapper[4984]: E0130 10:42:16.098165 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.437539 4984 scope.go:117] "RemoveContainer" containerID="71b37a694edb5502847d9b98becba6b55ffee4b768b800a7abda8cfa9dacfecb" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.464588 4984 scope.go:117] "RemoveContainer" containerID="f262460637877d4f5daeebd4c5ff5dbc2e5b82919bca6faedbbb9bbf414ca732" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.585203 4984 scope.go:117] "RemoveContainer" containerID="ff2e43e014ee433edf02ecd3b11995f34ff686f322770fec87dcc986576c77fd" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.635887 4984 scope.go:117] "RemoveContainer" containerID="39ac005f3b0418711d3d897077b35efc4095cfe3b629a62736c2db0f861264f1" Jan 30 10:42:22 crc kubenswrapper[4984]: I0130 10:42:22.707223 4984 scope.go:117] "RemoveContainer" containerID="886c26fc093739c495beed5c6f76e0e1f2d0d794ded30c68297ca382924af529" Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.045178 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.059178 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.066421 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.073669 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7vrp9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.080427 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9847-account-create-update-p46tr"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.087449 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qs8g9"] Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.101019 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0be8dd-7b50-43e1-b223-8d5082a0c499" path="/var/lib/kubelet/pods/0b0be8dd-7b50-43e1-b223-8d5082a0c499/volumes" Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.102278 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ce47a3-89a8-45f2-809e-9aaab0e718e2" path="/var/lib/kubelet/pods/61ce47a3-89a8-45f2-809e-9aaab0e718e2/volumes" Jan 30 10:42:26 crc kubenswrapper[4984]: I0130 10:42:26.103171 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24" path="/var/lib/kubelet/pods/bf7ef1a1-61d7-4de4-bd64-6e46e5fcfe24/volumes" Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.029659 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.038328 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.052282 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 
10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.066690 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f837-account-create-update-tljj4"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.073317 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xjhtp"] Jan 30 10:42:27 crc kubenswrapper[4984]: I0130 10:42:27.080733 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-32c7-account-create-update-2mdsq"] Jan 30 10:42:28 crc kubenswrapper[4984]: I0130 10:42:28.102472 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e68f06-af93-45d0-bf19-26469cac41f1" path="/var/lib/kubelet/pods/24e68f06-af93-45d0-bf19-26469cac41f1/volumes" Jan 30 10:42:28 crc kubenswrapper[4984]: I0130 10:42:28.103234 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c78c96a-fba2-4de8-ab70-a16d31722959" path="/var/lib/kubelet/pods/3c78c96a-fba2-4de8-ab70-a16d31722959/volumes" Jan 30 10:42:28 crc kubenswrapper[4984]: I0130 10:42:28.103769 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4173473e-6a7e-400a-bc3e-2a22d5ef6cd1" path="/var/lib/kubelet/pods/4173473e-6a7e-400a-bc3e-2a22d5ef6cd1/volumes" Jan 30 10:42:31 crc kubenswrapper[4984]: I0130 10:42:31.090286 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:31 crc kubenswrapper[4984]: E0130 10:42:31.090915 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:42:43 crc kubenswrapper[4984]: I0130 10:42:43.090297 
4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a" Jan 30 10:42:43 crc kubenswrapper[4984]: I0130 10:42:43.143801 4984 generic.go:334] "Generic (PLEG): container finished" podID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerID="0a9f391398259516c72ece0ca377a1d28d2d067e8bd53fb4fc4fa3f92e8b395d" exitCode=0 Jan 30 10:42:43 crc kubenswrapper[4984]: I0130 10:42:43.143858 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerDied","Data":"0a9f391398259516c72ece0ca377a1d28d2d067e8bd53fb4fc4fa3f92e8b395d"} Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.154936 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e"} Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.633209 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.698908 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") pod \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.699140 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") pod \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.699311 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") pod \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\" (UID: \"ed90c997-eddb-4afb-ae0d-31dd3ef4c485\") " Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.707467 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2" (OuterVolumeSpecName: "kube-api-access-72sz2") pod "ed90c997-eddb-4afb-ae0d-31dd3ef4c485" (UID: "ed90c997-eddb-4afb-ae0d-31dd3ef4c485"). InnerVolumeSpecName "kube-api-access-72sz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.737265 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory" (OuterVolumeSpecName: "inventory") pod "ed90c997-eddb-4afb-ae0d-31dd3ef4c485" (UID: "ed90c997-eddb-4afb-ae0d-31dd3ef4c485"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.757009 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed90c997-eddb-4afb-ae0d-31dd3ef4c485" (UID: "ed90c997-eddb-4afb-ae0d-31dd3ef4c485"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.801360 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.801513 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72sz2\" (UniqueName: \"kubernetes.io/projected/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-kube-api-access-72sz2\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:44 crc kubenswrapper[4984]: I0130 10:42:44.801572 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed90c997-eddb-4afb-ae0d-31dd3ef4c485-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.168186 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" event={"ID":"ed90c997-eddb-4afb-ae0d-31dd3ef4c485","Type":"ContainerDied","Data":"e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557"} Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.168235 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27238d760f08c68dc880e6cf041362e1eaee229540cfbb4a52c4136dbb39557" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 
10:42:45.168235 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.250466 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5"] Jan 30 10:42:45 crc kubenswrapper[4984]: E0130 10:42:45.250881 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.250900 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.251078 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed90c997-eddb-4afb-ae0d-31dd3ef4c485" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.251688 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.254022 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.254208 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.254388 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.259063 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.270116 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5"] Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.311480 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.311589 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 
10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.311666 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.413115 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.413302 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.413344 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.417889 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.426890 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.438767 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.602887 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:45 crc kubenswrapper[4984]: I0130 10:42:45.932520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5"] Jan 30 10:42:46 crc kubenswrapper[4984]: I0130 10:42:46.181177 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerStarted","Data":"d54ffc9a1ce5fadca0d94660e4c1b921690202def7ca273571df9a391b864e3d"} Jan 30 10:42:47 crc kubenswrapper[4984]: I0130 10:42:47.214893 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerStarted","Data":"a85fad9eece6221ac595f3a8ec29e117125f3a05c44dc9496af9f4e0d191f1af"} Jan 30 10:42:47 crc kubenswrapper[4984]: I0130 10:42:47.242701 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" podStartSLOduration=1.82201923 podStartE2EDuration="2.242682527s" podCreationTimestamp="2026-01-30 10:42:45 +0000 UTC" firstStartedPulling="2026-01-30 10:42:45.936292763 +0000 UTC m=+1870.502596577" lastFinishedPulling="2026-01-30 10:42:46.35695605 +0000 UTC m=+1870.923259874" observedRunningTime="2026-01-30 10:42:47.233034726 +0000 UTC m=+1871.799338560" watchObservedRunningTime="2026-01-30 10:42:47.242682527 +0000 UTC m=+1871.808986361" Jan 30 10:42:51 crc kubenswrapper[4984]: I0130 10:42:51.262103 4984 generic.go:334] "Generic (PLEG): container finished" podID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerID="a85fad9eece6221ac595f3a8ec29e117125f3a05c44dc9496af9f4e0d191f1af" exitCode=0 Jan 30 10:42:51 crc kubenswrapper[4984]: I0130 10:42:51.262151 4984 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerDied","Data":"a85fad9eece6221ac595f3a8ec29e117125f3a05c44dc9496af9f4e0d191f1af"} Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.748230 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.861394 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") pod \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.861441 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") pod \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.861743 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") pod \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\" (UID: \"d0aef065-96aa-4cd6-9069-627c5f97fcc3\") " Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.872582 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv" (OuterVolumeSpecName: "kube-api-access-88sqv") pod "d0aef065-96aa-4cd6-9069-627c5f97fcc3" (UID: "d0aef065-96aa-4cd6-9069-627c5f97fcc3"). InnerVolumeSpecName "kube-api-access-88sqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.897637 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory" (OuterVolumeSpecName: "inventory") pod "d0aef065-96aa-4cd6-9069-627c5f97fcc3" (UID: "d0aef065-96aa-4cd6-9069-627c5f97fcc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.922325 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d0aef065-96aa-4cd6-9069-627c5f97fcc3" (UID: "d0aef065-96aa-4cd6-9069-627c5f97fcc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.964354 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88sqv\" (UniqueName: \"kubernetes.io/projected/d0aef065-96aa-4cd6-9069-627c5f97fcc3-kube-api-access-88sqv\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.964396 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:52 crc kubenswrapper[4984]: I0130 10:42:52.964410 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0aef065-96aa-4cd6-9069-627c5f97fcc3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.284749 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" 
event={"ID":"d0aef065-96aa-4cd6-9069-627c5f97fcc3","Type":"ContainerDied","Data":"d54ffc9a1ce5fadca0d94660e4c1b921690202def7ca273571df9a391b864e3d"} Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.285018 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d54ffc9a1ce5fadca0d94660e4c1b921690202def7ca273571df9a391b864e3d" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.284787 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.367605 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8"] Jan 30 10:42:53 crc kubenswrapper[4984]: E0130 10:42:53.367995 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.368013 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.368180 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0aef065-96aa-4cd6-9069-627c5f97fcc3" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.368782 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.370844 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.371319 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.372037 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.372307 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.386389 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8"] Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.473733 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.473792 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 
10:42:53.473850 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.575412 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.575607 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.575667 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.581820 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.581973 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.608853 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6cgx8\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:53 crc kubenswrapper[4984]: I0130 10:42:53.695207 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:42:54 crc kubenswrapper[4984]: I0130 10:42:54.270530 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8"] Jan 30 10:42:54 crc kubenswrapper[4984]: I0130 10:42:54.298365 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerStarted","Data":"1df449ca040ea672e9a4cc4bb7727ac79e3da1eb4b7407f61952857b503e1e7e"} Jan 30 10:42:55 crc kubenswrapper[4984]: I0130 10:42:55.310088 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerStarted","Data":"cb8c388bbbae2b7c1fb63911f4181dd6ab414387ad4673b99b61a1037666b30a"} Jan 30 10:42:55 crc kubenswrapper[4984]: I0130 10:42:55.332544 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" podStartSLOduration=1.800236569 podStartE2EDuration="2.332524298s" podCreationTimestamp="2026-01-30 10:42:53 +0000 UTC" firstStartedPulling="2026-01-30 10:42:54.274759704 +0000 UTC m=+1878.841063538" lastFinishedPulling="2026-01-30 10:42:54.807047393 +0000 UTC m=+1879.373351267" observedRunningTime="2026-01-30 10:42:55.330833282 +0000 UTC m=+1879.897137106" watchObservedRunningTime="2026-01-30 10:42:55.332524298 +0000 UTC m=+1879.898828132" Jan 30 10:42:56 crc kubenswrapper[4984]: I0130 10:42:56.065083 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:42:56 crc kubenswrapper[4984]: I0130 10:42:56.075895 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5wpxl"] Jan 30 10:42:56 crc kubenswrapper[4984]: I0130 
10:42:56.121989 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaa8458-e32e-4a6f-9e67-3e394d9daa32" path="/var/lib/kubelet/pods/deaa8458-e32e-4a6f-9e67-3e394d9daa32/volumes" Jan 30 10:43:19 crc kubenswrapper[4984]: I0130 10:43:19.036286 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:43:19 crc kubenswrapper[4984]: I0130 10:43:19.045032 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hphht"] Jan 30 10:43:20 crc kubenswrapper[4984]: I0130 10:43:20.106498 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c9c509-275d-47bc-81f8-755bab6b2be8" path="/var/lib/kubelet/pods/e9c9c509-275d-47bc-81f8-755bab6b2be8/volumes" Jan 30 10:43:22 crc kubenswrapper[4984]: I0130 10:43:22.949064 4984 scope.go:117] "RemoveContainer" containerID="f6d3ea39520182e990fd0bc6891d62649eeb90ca61d761fe228472651906c15d" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.002922 4984 scope.go:117] "RemoveContainer" containerID="6eda3836ac458742c17eeba0173a28f9e62b42b7dbf4d4f433eb7525f26d90e6" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.029462 4984 scope.go:117] "RemoveContainer" containerID="d94d94153d595e4b9ce76157accc6d01c2cb9f1b145e151fe1e75fe78e9c2a57" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.110113 4984 scope.go:117] "RemoveContainer" containerID="73cbe196d056395ed3b9f37ad8135b6261f4b509ecb1bd1d8585347fdf36d081" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.134140 4984 scope.go:117] "RemoveContainer" containerID="b83b864f9215b1b901d3cd0dc5c544dfe0581fd330c80ad8350dc925278bda90" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.182873 4984 scope.go:117] "RemoveContainer" containerID="f9b3187c82aff853cf22b0038f5d38d1cea29bfe3a85c99f377ce27a24d35342" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.214381 4984 scope.go:117] "RemoveContainer" 
containerID="55c4ad08202caa288e8d7e5822ac5705c3135e2d86feaa79d8724c0c9dd0784d" Jan 30 10:43:23 crc kubenswrapper[4984]: I0130 10:43:23.230035 4984 scope.go:117] "RemoveContainer" containerID="9cb5d7c891eea50ab9ba8545dcc17cab4c0d194d18b1326a2f9e72c749d5ea5f" Jan 30 10:43:32 crc kubenswrapper[4984]: I0130 10:43:32.646999 4984 generic.go:334] "Generic (PLEG): container finished" podID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerID="cb8c388bbbae2b7c1fb63911f4181dd6ab414387ad4673b99b61a1037666b30a" exitCode=0 Jan 30 10:43:32 crc kubenswrapper[4984]: I0130 10:43:32.647115 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerDied","Data":"cb8c388bbbae2b7c1fb63911f4181dd6ab414387ad4673b99b61a1037666b30a"} Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.136917 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.286058 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") pod \"875c90f8-2855-43ce-993f-fa64c7d92c66\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.286358 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") pod \"875c90f8-2855-43ce-993f-fa64c7d92c66\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.286426 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmzh\" (UniqueName: 
\"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") pod \"875c90f8-2855-43ce-993f-fa64c7d92c66\" (UID: \"875c90f8-2855-43ce-993f-fa64c7d92c66\") " Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.290858 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh" (OuterVolumeSpecName: "kube-api-access-9qmzh") pod "875c90f8-2855-43ce-993f-fa64c7d92c66" (UID: "875c90f8-2855-43ce-993f-fa64c7d92c66"). InnerVolumeSpecName "kube-api-access-9qmzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.330041 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory" (OuterVolumeSpecName: "inventory") pod "875c90f8-2855-43ce-993f-fa64c7d92c66" (UID: "875c90f8-2855-43ce-993f-fa64c7d92c66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.332599 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "875c90f8-2855-43ce-993f-fa64c7d92c66" (UID: "875c90f8-2855-43ce-993f-fa64c7d92c66"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.388707 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.388753 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/875c90f8-2855-43ce-993f-fa64c7d92c66-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.388768 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qmzh\" (UniqueName: \"kubernetes.io/projected/875c90f8-2855-43ce-993f-fa64c7d92c66-kube-api-access-9qmzh\") on node \"crc\" DevicePath \"\"" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.668608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" event={"ID":"875c90f8-2855-43ce-993f-fa64c7d92c66","Type":"ContainerDied","Data":"1df449ca040ea672e9a4cc4bb7727ac79e3da1eb4b7407f61952857b503e1e7e"} Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.668748 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df449ca040ea672e9a4cc4bb7727ac79e3da1eb4b7407f61952857b503e1e7e" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.668837 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6cgx8" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.824749 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26"] Jan 30 10:43:34 crc kubenswrapper[4984]: E0130 10:43:34.825282 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.825308 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.825559 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="875c90f8-2855-43ce-993f-fa64c7d92c66" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.826354 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.833190 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.833194 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.833541 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.834684 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:43:34 crc kubenswrapper[4984]: I0130 10:43:34.838198 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26"] Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.000859 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.000972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.001097 4984 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.103542 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.103809 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.103988 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.107199 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.115392 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.118376 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blm26\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.151384 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:43:35 crc kubenswrapper[4984]: I0130 10:43:35.765291 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26"] Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.214058 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.684809 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerStarted","Data":"a400c4f66e3a3b9465f76d708b851aa725e6f73cf1be151df20ace1e8ece9c1e"} Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.685435 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerStarted","Data":"7c910ea5dd102522c5d79155c2bd51fd0b9954e92e2175ba805e9766681b1b44"} Jan 30 10:43:36 crc kubenswrapper[4984]: I0130 10:43:36.705492 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" podStartSLOduration=2.267238146 podStartE2EDuration="2.705470936s" podCreationTimestamp="2026-01-30 10:43:34 +0000 UTC" firstStartedPulling="2026-01-30 10:43:35.77348073 +0000 UTC m=+1920.339784554" lastFinishedPulling="2026-01-30 10:43:36.21171352 +0000 UTC m=+1920.778017344" observedRunningTime="2026-01-30 10:43:36.702577488 +0000 UTC m=+1921.268881352" watchObservedRunningTime="2026-01-30 10:43:36.705470936 +0000 UTC m=+1921.271774760" Jan 30 10:43:37 crc kubenswrapper[4984]: I0130 10:43:37.030691 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:43:37 crc 
kubenswrapper[4984]: I0130 10:43:37.040500 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nvx8g"] Jan 30 10:43:38 crc kubenswrapper[4984]: I0130 10:43:38.099817 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6148a148-07c4-4584-95ff-10d5e5147954" path="/var/lib/kubelet/pods/6148a148-07c4-4584-95ff-10d5e5147954/volumes" Jan 30 10:44:05 crc kubenswrapper[4984]: I0130 10:44:05.043792 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:44:05 crc kubenswrapper[4984]: I0130 10:44:05.054287 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-489nm"] Jan 30 10:44:06 crc kubenswrapper[4984]: I0130 10:44:06.100727 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a005f64f-9ec0-4a4a-b64e-9ae00924dce7" path="/var/lib/kubelet/pods/a005f64f-9ec0-4a4a-b64e-9ae00924dce7/volumes" Jan 30 10:44:22 crc kubenswrapper[4984]: I0130 10:44:22.124888 4984 generic.go:334] "Generic (PLEG): container finished" podID="5ca6f868-9db4-483a-bea5-dc471b160721" containerID="a400c4f66e3a3b9465f76d708b851aa725e6f73cf1be151df20ace1e8ece9c1e" exitCode=0 Jan 30 10:44:22 crc kubenswrapper[4984]: I0130 10:44:22.124966 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerDied","Data":"a400c4f66e3a3b9465f76d708b851aa725e6f73cf1be151df20ace1e8ece9c1e"} Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.384885 4984 scope.go:117] "RemoveContainer" containerID="01f24060ed65c8e2bd6475cb81b1d352cdc388008c24396c142500998835d3df" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.453765 4984 scope.go:117] "RemoveContainer" containerID="899b94de134f9ceca80081ff737a83cc02c723d317671d240f22cc01fff73eb3" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.617736 4984 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.788911 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") pod \"5ca6f868-9db4-483a-bea5-dc471b160721\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.789410 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") pod \"5ca6f868-9db4-483a-bea5-dc471b160721\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.789562 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") pod \"5ca6f868-9db4-483a-bea5-dc471b160721\" (UID: \"5ca6f868-9db4-483a-bea5-dc471b160721\") " Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.797113 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd" (OuterVolumeSpecName: "kube-api-access-wtcjd") pod "5ca6f868-9db4-483a-bea5-dc471b160721" (UID: "5ca6f868-9db4-483a-bea5-dc471b160721"). InnerVolumeSpecName "kube-api-access-wtcjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.816511 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ca6f868-9db4-483a-bea5-dc471b160721" (UID: "5ca6f868-9db4-483a-bea5-dc471b160721"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.832084 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory" (OuterVolumeSpecName: "inventory") pod "5ca6f868-9db4-483a-bea5-dc471b160721" (UID: "5ca6f868-9db4-483a-bea5-dc471b160721"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.891938 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtcjd\" (UniqueName: \"kubernetes.io/projected/5ca6f868-9db4-483a-bea5-dc471b160721-kube-api-access-wtcjd\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.892131 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:23 crc kubenswrapper[4984]: I0130 10:44:23.892147 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ca6f868-9db4-483a-bea5-dc471b160721-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.143432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" 
event={"ID":"5ca6f868-9db4-483a-bea5-dc471b160721","Type":"ContainerDied","Data":"7c910ea5dd102522c5d79155c2bd51fd0b9954e92e2175ba805e9766681b1b44"} Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.143503 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c910ea5dd102522c5d79155c2bd51fd0b9954e92e2175ba805e9766681b1b44" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.143567 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blm26" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.223696 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ds8rj"] Jan 30 10:44:24 crc kubenswrapper[4984]: E0130 10:44:24.224078 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6f868-9db4-483a-bea5-dc471b160721" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.224095 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6f868-9db4-483a-bea5-dc471b160721" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.224310 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6f868-9db4-483a-bea5-dc471b160721" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.224973 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.227668 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.228050 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.228284 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.239587 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.240588 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ds8rj"] Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.402179 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.402231 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.402467 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.503803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.503937 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.503966 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.507902 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: 
I0130 10:44:24.520830 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.535050 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"ssh-known-hosts-edpm-deployment-ds8rj\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.543336 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.903569 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ds8rj"] Jan 30 10:44:24 crc kubenswrapper[4984]: I0130 10:44:24.906029 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:44:25 crc kubenswrapper[4984]: I0130 10:44:25.158485 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerStarted","Data":"fcc5131cc8bfa1c54924096fd3c2646015acb95b4b2714383282a62d5b4e58ff"} Jan 30 10:44:28 crc kubenswrapper[4984]: I0130 10:44:28.186554 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerStarted","Data":"91b3b2918474900e35a44faff448ecf588a4039058dd642442be233ed68bf211"} Jan 30 10:44:28 crc kubenswrapper[4984]: I0130 
10:44:28.202928 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" podStartSLOduration=1.7488395799999998 podStartE2EDuration="4.202904382s" podCreationTimestamp="2026-01-30 10:44:24 +0000 UTC" firstStartedPulling="2026-01-30 10:44:24.905843392 +0000 UTC m=+1969.472147216" lastFinishedPulling="2026-01-30 10:44:27.359908184 +0000 UTC m=+1971.926212018" observedRunningTime="2026-01-30 10:44:28.200724053 +0000 UTC m=+1972.767027887" watchObservedRunningTime="2026-01-30 10:44:28.202904382 +0000 UTC m=+1972.769208226" Jan 30 10:44:35 crc kubenswrapper[4984]: I0130 10:44:35.250195 4984 generic.go:334] "Generic (PLEG): container finished" podID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerID="91b3b2918474900e35a44faff448ecf588a4039058dd642442be233ed68bf211" exitCode=0 Jan 30 10:44:35 crc kubenswrapper[4984]: I0130 10:44:35.250322 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerDied","Data":"91b3b2918474900e35a44faff448ecf588a4039058dd642442be233ed68bf211"} Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.691636 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.723263 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") pod \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.723318 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") pod \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.723351 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") pod \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\" (UID: \"1e567c3d-d9b0-4be3-ad02-21a342ce33fd\") " Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.738078 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j" (OuterVolumeSpecName: "kube-api-access-hdx5j") pod "1e567c3d-d9b0-4be3-ad02-21a342ce33fd" (UID: "1e567c3d-d9b0-4be3-ad02-21a342ce33fd"). InnerVolumeSpecName "kube-api-access-hdx5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.775889 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e567c3d-d9b0-4be3-ad02-21a342ce33fd" (UID: "1e567c3d-d9b0-4be3-ad02-21a342ce33fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.781612 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1e567c3d-d9b0-4be3-ad02-21a342ce33fd" (UID: "1e567c3d-d9b0-4be3-ad02-21a342ce33fd"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.825428 4984 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.825467 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:36 crc kubenswrapper[4984]: I0130 10:44:36.825481 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdx5j\" (UniqueName: \"kubernetes.io/projected/1e567c3d-d9b0-4be3-ad02-21a342ce33fd-kube-api-access-hdx5j\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.293633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" 
event={"ID":"1e567c3d-d9b0-4be3-ad02-21a342ce33fd","Type":"ContainerDied","Data":"fcc5131cc8bfa1c54924096fd3c2646015acb95b4b2714383282a62d5b4e58ff"} Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.293674 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ds8rj" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.293694 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc5131cc8bfa1c54924096fd3c2646015acb95b4b2714383282a62d5b4e58ff" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.366102 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn"] Jan 30 10:44:37 crc kubenswrapper[4984]: E0130 10:44:37.366619 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerName="ssh-known-hosts-edpm-deployment" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.366634 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerName="ssh-known-hosts-edpm-deployment" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.366855 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e567c3d-d9b0-4be3-ad02-21a342ce33fd" containerName="ssh-known-hosts-edpm-deployment" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.367563 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.372963 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.373384 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.373587 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.373654 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.376768 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn"] Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.438834 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.438908 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.438978 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.543001 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.543088 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.543126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.551053 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.552534 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.574624 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7fxcn\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:37 crc kubenswrapper[4984]: I0130 10:44:37.766900 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:38 crc kubenswrapper[4984]: W0130 10:44:38.306520 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb337ec46_c5ba_4b83_91f7_ad4b826d9595.slice/crio-5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e WatchSource:0}: Error finding container 5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e: Status 404 returned error can't find the container with id 5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e Jan 30 10:44:38 crc kubenswrapper[4984]: I0130 10:44:38.310579 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn"] Jan 30 10:44:39 crc kubenswrapper[4984]: I0130 10:44:39.311107 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerStarted","Data":"5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e"} Jan 30 10:44:40 crc kubenswrapper[4984]: I0130 10:44:40.321590 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerStarted","Data":"d17e639236d21577e05316cdaa8c13e0530cba019cd6740ff4bc7910d13ac8fb"} Jan 30 10:44:40 crc kubenswrapper[4984]: I0130 10:44:40.351132 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" podStartSLOduration=2.743996561 podStartE2EDuration="3.351110561s" podCreationTimestamp="2026-01-30 10:44:37 +0000 UTC" firstStartedPulling="2026-01-30 10:44:38.308921699 +0000 UTC m=+1982.875225523" lastFinishedPulling="2026-01-30 10:44:38.916035689 +0000 UTC m=+1983.482339523" observedRunningTime="2026-01-30 
10:44:40.34363299 +0000 UTC m=+1984.909936834" watchObservedRunningTime="2026-01-30 10:44:40.351110561 +0000 UTC m=+1984.917414395" Jan 30 10:44:47 crc kubenswrapper[4984]: I0130 10:44:47.390538 4984 generic.go:334] "Generic (PLEG): container finished" podID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerID="d17e639236d21577e05316cdaa8c13e0530cba019cd6740ff4bc7910d13ac8fb" exitCode=0 Jan 30 10:44:47 crc kubenswrapper[4984]: I0130 10:44:47.390631 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerDied","Data":"d17e639236d21577e05316cdaa8c13e0530cba019cd6740ff4bc7910d13ac8fb"} Jan 30 10:44:48 crc kubenswrapper[4984]: I0130 10:44:48.915689 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.077327 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") pod \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.077546 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") pod \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\" (UID: \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.077763 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") pod \"b337ec46-c5ba-4b83-91f7-ad4b826d9595\" (UID: 
\"b337ec46-c5ba-4b83-91f7-ad4b826d9595\") " Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.086750 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v" (OuterVolumeSpecName: "kube-api-access-gt68v") pod "b337ec46-c5ba-4b83-91f7-ad4b826d9595" (UID: "b337ec46-c5ba-4b83-91f7-ad4b826d9595"). InnerVolumeSpecName "kube-api-access-gt68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.113845 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b337ec46-c5ba-4b83-91f7-ad4b826d9595" (UID: "b337ec46-c5ba-4b83-91f7-ad4b826d9595"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.138024 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory" (OuterVolumeSpecName: "inventory") pod "b337ec46-c5ba-4b83-91f7-ad4b826d9595" (UID: "b337ec46-c5ba-4b83-91f7-ad4b826d9595"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.180858 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.180895 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b337ec46-c5ba-4b83-91f7-ad4b826d9595-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.180909 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt68v\" (UniqueName: \"kubernetes.io/projected/b337ec46-c5ba-4b83-91f7-ad4b826d9595-kube-api-access-gt68v\") on node \"crc\" DevicePath \"\"" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.415434 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" event={"ID":"b337ec46-c5ba-4b83-91f7-ad4b826d9595","Type":"ContainerDied","Data":"5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e"} Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.415496 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e0590d3e5e34c3c4d604c7a6f75c2c95c77009ed877aeb15bf9f47700c1525e" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.415607 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7fxcn" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.527076 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45"] Jan 30 10:44:49 crc kubenswrapper[4984]: E0130 10:44:49.527634 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.527665 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.527987 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b337ec46-c5ba-4b83-91f7-ad4b826d9595" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.528930 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.533322 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.533585 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.533795 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.541418 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.547984 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45"] Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.696972 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.697093 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 
10:44:49.697225 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.799219 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.799342 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.799424 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.805556 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.807310 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.819851 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:49 crc kubenswrapper[4984]: I0130 10:44:49.860924 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:44:50 crc kubenswrapper[4984]: I0130 10:44:50.435589 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45"] Jan 30 10:44:51 crc kubenswrapper[4984]: I0130 10:44:51.443902 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerStarted","Data":"8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254"} Jan 30 10:44:52 crc kubenswrapper[4984]: I0130 10:44:52.454166 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerStarted","Data":"c3dc68b4fbe0d7753c9064d2733e6fc5c7251dd1c447e7d97f3ad783a81ee018"} Jan 30 10:44:52 crc kubenswrapper[4984]: I0130 10:44:52.471478 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" podStartSLOduration=2.663590802 podStartE2EDuration="3.471462562s" podCreationTimestamp="2026-01-30 10:44:49 +0000 UTC" firstStartedPulling="2026-01-30 10:44:50.443592405 +0000 UTC m=+1995.009896229" lastFinishedPulling="2026-01-30 10:44:51.251464125 +0000 UTC m=+1995.817767989" observedRunningTime="2026-01-30 10:44:52.468938604 +0000 UTC m=+1997.035242428" watchObservedRunningTime="2026-01-30 10:44:52.471462562 +0000 UTC m=+1997.037766386" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.146113 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb"] Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.148850 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.153227 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.153307 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.158051 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb"] Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.343294 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.343563 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.343660 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.445747 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.445895 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.446022 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.447064 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.455352 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.483099 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"collect-profiles-29496165-4c4mb\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:00 crc kubenswrapper[4984]: I0130 10:45:00.488211 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.010390 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb"] Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.544240 4984 generic.go:334] "Generic (PLEG): container finished" podID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerID="c3dc68b4fbe0d7753c9064d2733e6fc5c7251dd1c447e7d97f3ad783a81ee018" exitCode=0 Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.544308 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerDied","Data":"c3dc68b4fbe0d7753c9064d2733e6fc5c7251dd1c447e7d97f3ad783a81ee018"} Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.548175 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" 
event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerStarted","Data":"cca504e9dee208e1eb7fe8fe7be1f987f4e4057900a16113540eac221bbbcaa7"} Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.548209 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerStarted","Data":"bc3a18901f1569393bfcf2d09999123881433a6bca4c55d907f059140dad5e74"} Jan 30 10:45:01 crc kubenswrapper[4984]: I0130 10:45:01.583879 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" podStartSLOduration=1.5838605019999998 podStartE2EDuration="1.583860502s" podCreationTimestamp="2026-01-30 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 10:45:01.574755657 +0000 UTC m=+2006.141059491" watchObservedRunningTime="2026-01-30 10:45:01.583860502 +0000 UTC m=+2006.150164336" Jan 30 10:45:02 crc kubenswrapper[4984]: I0130 10:45:02.560528 4984 generic.go:334] "Generic (PLEG): container finished" podID="c04603fc-717d-4780-886e-4e449999ca6c" containerID="cca504e9dee208e1eb7fe8fe7be1f987f4e4057900a16113540eac221bbbcaa7" exitCode=0 Jan 30 10:45:02 crc kubenswrapper[4984]: I0130 10:45:02.561618 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerDied","Data":"cca504e9dee208e1eb7fe8fe7be1f987f4e4057900a16113540eac221bbbcaa7"} Jan 30 10:45:02 crc kubenswrapper[4984]: I0130 10:45:02.973360 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.001090 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.001139 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.097540 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") pod \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.097638 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") pod \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.097699 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") pod \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\" (UID: \"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78\") " Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.103711 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv" (OuterVolumeSpecName: "kube-api-access-4fjgv") pod "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" (UID: "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78"). InnerVolumeSpecName "kube-api-access-4fjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.129958 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" (UID: "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.132120 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory" (OuterVolumeSpecName: "inventory") pod "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" (UID: "b6b5ab38-6c9b-4526-bbee-d3a4c460ea78"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.200232 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.200314 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjgv\" (UniqueName: \"kubernetes.io/projected/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-kube-api-access-4fjgv\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.200324 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6b5ab38-6c9b-4526-bbee-d3a4c460ea78-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.589696 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.590301 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45" event={"ID":"b6b5ab38-6c9b-4526-bbee-d3a4c460ea78","Type":"ContainerDied","Data":"8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254"} Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.590346 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.664396 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"] Jan 30 10:45:03 crc kubenswrapper[4984]: E0130 10:45:03.665050 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.665073 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.665355 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b5ab38-6c9b-4526-bbee-d3a4c460ea78" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.666101 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.668360 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.668673 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.668926 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.669539 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.669687 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.669908 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.670020 4984 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.670140 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.722472 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"] Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.809891 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.809957 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810028 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810052 
4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810082 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810111 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810134 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810348 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810461 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810514 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810576 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810717 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.810882 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: E0130 10:45:03.816798 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b5ab38_6c9b_4526_bbee_d3a4c460ea78.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b5ab38_6c9b_4526_bbee_d3a4c460ea78.slice/crio-8f191e7b36fb17ac0140ff17fc544daf58357b9629e194861fc73aa23f34f254\": RecentStats: unable to find data in memory cache]" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915044 
4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915104 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915154 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915200 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915268 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915322 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915373 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915410 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915466 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915529 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915639 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915752 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915828 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: 
\"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.915876 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.920941 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.920974 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.921525 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 
10:45:03.921945 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.922421 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.922529 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.923222 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.923502 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.923541 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.924087 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.924170 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.924339 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.932135 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:03 crc kubenswrapper[4984]: I0130 10:45:03.939748 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.014730 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.018755 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.120195 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") pod \"c04603fc-717d-4780-886e-4e449999ca6c\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.120671 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") pod \"c04603fc-717d-4780-886e-4e449999ca6c\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.120818 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") pod \"c04603fc-717d-4780-886e-4e449999ca6c\" (UID: \"c04603fc-717d-4780-886e-4e449999ca6c\") " Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.121051 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c04603fc-717d-4780-886e-4e449999ca6c" (UID: "c04603fc-717d-4780-886e-4e449999ca6c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.121547 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c04603fc-717d-4780-886e-4e449999ca6c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.124612 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c04603fc-717d-4780-886e-4e449999ca6c" (UID: "c04603fc-717d-4780-886e-4e449999ca6c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.127977 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p" (OuterVolumeSpecName: "kube-api-access-6g74p") pod "c04603fc-717d-4780-886e-4e449999ca6c" (UID: "c04603fc-717d-4780-886e-4e449999ca6c"). InnerVolumeSpecName "kube-api-access-6g74p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.223891 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g74p\" (UniqueName: \"kubernetes.io/projected/c04603fc-717d-4780-886e-4e449999ca6c-kube-api-access-6g74p\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.223949 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c04603fc-717d-4780-886e-4e449999ca6c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.563740 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"] Jan 30 10:45:04 crc kubenswrapper[4984]: W0130 10:45:04.570737 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908eb334_fac2_41ed_96d6_d7c80f8e98b3.slice/crio-455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba WatchSource:0}: Error finding container 455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba: Status 404 returned error can't find the container with id 455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.604108 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerStarted","Data":"455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba"} Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.607421 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.611348 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496165-4c4mb" event={"ID":"c04603fc-717d-4780-886e-4e449999ca6c","Type":"ContainerDied","Data":"bc3a18901f1569393bfcf2d09999123881433a6bca4c55d907f059140dad5e74"} Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.611431 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3a18901f1569393bfcf2d09999123881433a6bca4c55d907f059140dad5e74" Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.656651 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:45:04 crc kubenswrapper[4984]: I0130 10:45:04.664060 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496120-p5sk8"] Jan 30 10:45:05 crc kubenswrapper[4984]: I0130 10:45:05.622109 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerStarted","Data":"e4f9b270c703ea74092e0144e5aec51be20128d2d9595e52b0665ee02d376f8a"} Jan 30 10:45:05 crc kubenswrapper[4984]: I0130 10:45:05.645979 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" podStartSLOduration=2.082902925 podStartE2EDuration="2.645964798s" podCreationTimestamp="2026-01-30 10:45:03 +0000 UTC" firstStartedPulling="2026-01-30 10:45:04.574813093 +0000 UTC m=+2009.141116917" lastFinishedPulling="2026-01-30 10:45:05.137874916 +0000 UTC m=+2009.704178790" observedRunningTime="2026-01-30 10:45:05.642551166 +0000 UTC m=+2010.208855020" watchObservedRunningTime="2026-01-30 
10:45:05.645964798 +0000 UTC m=+2010.212268622" Jan 30 10:45:06 crc kubenswrapper[4984]: I0130 10:45:06.102387 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbdde9dd-69cf-405d-9143-1739e3acbdde" path="/var/lib/kubelet/pods/fbdde9dd-69cf-405d-9143-1739e3acbdde/volumes" Jan 30 10:45:23 crc kubenswrapper[4984]: I0130 10:45:23.565519 4984 scope.go:117] "RemoveContainer" containerID="b0a94db102107430e1a69e0b74ea3c70e83060546d2f77d6bc452f21055f639a" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.223304 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"] Jan 30 10:45:28 crc kubenswrapper[4984]: E0130 10:45:28.224399 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04603fc-717d-4780-886e-4e449999ca6c" containerName="collect-profiles" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.224418 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04603fc-717d-4780-886e-4e449999ca6c" containerName="collect-profiles" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.224685 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04603fc-717d-4780-886e-4e449999ca6c" containerName="collect-profiles" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.226658 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.235636 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"] Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.271525 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.271589 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.271673 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.374304 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.374368 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.374451 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.375352 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.375351 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.394899 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"redhat-operators-vvmbl\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") " pod="openshift-marketplace/redhat-operators-vvmbl" Jan 30 10:45:28 crc kubenswrapper[4984]: I0130 10:45:28.549028 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.110077 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.853420 4984 generic.go:334] "Generic (PLEG): container finished" podID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f" exitCode=0
Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.853540 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"}
Jan 30 10:45:29 crc kubenswrapper[4984]: I0130 10:45:29.853740 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerStarted","Data":"24a1148d15bee715c556a07badff1472f1bb6f79211e4948aa32e4f198a42f23"}
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.021980 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7b29f"]
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.024943 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.051792 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"]
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.118519 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.118591 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.118663 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.220986 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.221926 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.222058 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.222563 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.222614 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.250960 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"certified-operators-7b29f\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.346500 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.623277 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxbxn"]
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.625318 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.648862 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"]
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.732074 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.732653 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.732915 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.825736 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"]
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834223 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834296 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834434 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834792 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.834874 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.854218 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"community-operators-vxbxn\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.889566 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerStarted","Data":"a6b266db4cc117a7bc14e19332bd11fa3d2527d71ca0df6e62ce92ee33821566"}
Jan 30 10:45:30 crc kubenswrapper[4984]: I0130 10:45:30.952350 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn"
Jan 30 10:45:31 crc kubenswrapper[4984]: I0130 10:45:31.892340 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"]
Jan 30 10:45:31 crc kubenswrapper[4984]: W0130 10:45:31.901922 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod144fba12_676d_457b_83f6_6195f089a240.slice/crio-e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5 WatchSource:0}: Error finding container e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5: Status 404 returned error can't find the container with id e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5
Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.906320 4984 generic.go:334] "Generic (PLEG): container finished" podID="144fba12-676d-457b-83f6-6195f089a240" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" exitCode=0
Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.906556 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5"}
Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.906687 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerStarted","Data":"e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5"}
Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.908475 4984 generic.go:334] "Generic (PLEG): container finished" podID="86db9413-efcb-4f87-8605-317f50fb468d" containerID="54816c0cf5b8eb3e710c434a7440b8072cfb0783b73c9b74be19869c4c444e35" exitCode=0
Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.908559 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"54816c0cf5b8eb3e710c434a7440b8072cfb0783b73c9b74be19869c4c444e35"}
Jan 30 10:45:32 crc kubenswrapper[4984]: I0130 10:45:32.915161 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerStarted","Data":"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"}
Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.000695 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.000757 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.923859 4984 generic.go:334] "Generic (PLEG): container finished" podID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa" exitCode=0
Jan 30 10:45:33 crc kubenswrapper[4984]: I0130 10:45:33.923906 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"}
Jan 30 10:45:34 crc kubenswrapper[4984]: I0130 10:45:34.934607 4984 generic.go:334] "Generic (PLEG): container finished" podID="86db9413-efcb-4f87-8605-317f50fb468d" containerID="de219b46efc681590dfc9f6c663921083e34944cc19d08e21c367c0cf53ca7e4" exitCode=0
Jan 30 10:45:34 crc kubenswrapper[4984]: I0130 10:45:34.934713 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"de219b46efc681590dfc9f6c663921083e34944cc19d08e21c367c0cf53ca7e4"}
Jan 30 10:45:36 crc kubenswrapper[4984]: I0130 10:45:36.976949 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerStarted","Data":"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"}
Jan 30 10:45:38 crc kubenswrapper[4984]: I0130 10:45:38.005906 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvmbl" podStartSLOduration=5.184509049 podStartE2EDuration="10.005891115s" podCreationTimestamp="2026-01-30 10:45:28 +0000 UTC" firstStartedPulling="2026-01-30 10:45:29.854954144 +0000 UTC m=+2034.421257968" lastFinishedPulling="2026-01-30 10:45:34.6763362 +0000 UTC m=+2039.242640034" observedRunningTime="2026-01-30 10:45:38.00312578 +0000 UTC m=+2042.569429604" watchObservedRunningTime="2026-01-30 10:45:38.005891115 +0000 UTC m=+2042.572194939"
Jan 30 10:45:38 crc kubenswrapper[4984]: I0130 10:45:38.549903 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:45:38 crc kubenswrapper[4984]: I0130 10:45:38.549982 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:45:39 crc kubenswrapper[4984]: I0130 10:45:39.600419 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=<
Jan 30 10:45:39 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s
Jan 30 10:45:39 crc kubenswrapper[4984]: >
Jan 30 10:45:40 crc kubenswrapper[4984]: I0130 10:45:40.008173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerStarted","Data":"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a"}
Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.031765 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerStarted","Data":"ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7"}
Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.043673 4984 generic.go:334] "Generic (PLEG): container finished" podID="144fba12-676d-457b-83f6-6195f089a240" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" exitCode=0
Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.043810 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a"}
Jan 30 10:45:42 crc kubenswrapper[4984]: I0130 10:45:42.081107 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7b29f" podStartSLOduration=5.016924191 podStartE2EDuration="13.081086834s" podCreationTimestamp="2026-01-30 10:45:29 +0000 UTC" firstStartedPulling="2026-01-30 10:45:32.911701637 +0000 UTC m=+2037.478005471" lastFinishedPulling="2026-01-30 10:45:40.97586428 +0000 UTC m=+2045.542168114" observedRunningTime="2026-01-30 10:45:42.062624726 +0000 UTC m=+2046.628928590" watchObservedRunningTime="2026-01-30 10:45:42.081086834 +0000 UTC m=+2046.647390668"
Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.053693 4984 generic.go:334] "Generic (PLEG): container finished" podID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerID="e4f9b270c703ea74092e0144e5aec51be20128d2d9595e52b0665ee02d376f8a" exitCode=0
Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.053992 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerDied","Data":"e4f9b270c703ea74092e0144e5aec51be20128d2d9595e52b0665ee02d376f8a"}
Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.057071 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerStarted","Data":"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428"}
Jan 30 10:45:43 crc kubenswrapper[4984]: I0130 10:45:43.105628 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vxbxn" podStartSLOduration=4.376833429 podStartE2EDuration="13.105612113s" podCreationTimestamp="2026-01-30 10:45:30 +0000 UTC" firstStartedPulling="2026-01-30 10:45:33.925293271 +0000 UTC m=+2038.491597105" lastFinishedPulling="2026-01-30 10:45:42.654071955 +0000 UTC m=+2047.220375789" observedRunningTime="2026-01-30 10:45:43.100754472 +0000 UTC m=+2047.667058296" watchObservedRunningTime="2026-01-30 10:45:43.105612113 +0000 UTC m=+2047.671915937"
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.479990 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611267 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611396 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611430 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611489 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611525 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611558 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611639 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611745 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611777 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611805 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611834 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611864 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611887 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.611911 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") pod \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\" (UID: \"908eb334-fac2-41ed-96d6-d7c80f8e98b3\") "
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.618553 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.619749 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.621610 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628709 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628731 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628778 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628824 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk" (OuterVolumeSpecName: "kube-api-access-q9xfk") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "kube-api-access-q9xfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628866 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.628843 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.629660 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.630862 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.646845 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.655470 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory" (OuterVolumeSpecName: "inventory") pod "908eb334-fac2-41ed-96d6-d7c80f8e98b3" (UID: "908eb334-fac2-41ed-96d6-d7c80f8e98b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713536 4984 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713574 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713588 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713597 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713609 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713618 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713627 4984 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713636 4984 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713644 4984 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713652 4984 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713660 4984 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713670 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/908eb334-fac2-41ed-96d6-d7c80f8e98b3-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713678 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:44 crc kubenswrapper[4984]: I0130 10:45:44.713713 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9xfk\" (UniqueName: \"kubernetes.io/projected/908eb334-fac2-41ed-96d6-d7c80f8e98b3-kube-api-access-q9xfk\") on node \"crc\" DevicePath \"\""
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.076343 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9" event={"ID":"908eb334-fac2-41ed-96d6-d7c80f8e98b3","Type":"ContainerDied","Data":"455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba"}
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.076616 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455ffc8d004f501283b46de36a165c4a4c856e96260f75386dc1300937ebb0ba"
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.076684 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9"
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.239765 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"]
Jan 30 10:45:45 crc kubenswrapper[4984]: E0130 10:45:45.240194 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.240214 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.240402 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="908eb334-fac2-41ed-96d6-d7c80f8e98b3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.241022 4984 util.go:30] "No
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.246710 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.247341 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.247658 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.248059 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.248423 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.249790 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"] Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324600 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324787 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: 
\"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324933 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.324989 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.325341 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427631 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427776 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427827 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.427890 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.428027 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.429305 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: 
\"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.436661 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.438906 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.440713 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.449292 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-57hv6\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:45 crc kubenswrapper[4984]: I0130 10:45:45.612203 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" Jan 30 10:45:46 crc kubenswrapper[4984]: I0130 10:45:46.358814 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"] Jan 30 10:45:47 crc kubenswrapper[4984]: I0130 10:45:47.098039 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerStarted","Data":"5441f0d6bae31f22da1ba983066bd904726100824475ad7b233ba6ccd9255c43"} Jan 30 10:45:49 crc kubenswrapper[4984]: I0130 10:45:49.124456 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerStarted","Data":"3410426cef37882e6e66b81f19bf117fcd08a6957b5586575d68c7a3a2e02ae8"} Jan 30 10:45:49 crc kubenswrapper[4984]: I0130 10:45:49.154436 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" podStartSLOduration=2.462144813 podStartE2EDuration="4.154418686s" podCreationTimestamp="2026-01-30 10:45:45 +0000 UTC" firstStartedPulling="2026-01-30 10:45:46.354840563 +0000 UTC m=+2050.921144377" lastFinishedPulling="2026-01-30 10:45:48.047114396 +0000 UTC m=+2052.613418250" observedRunningTime="2026-01-30 10:45:49.15085574 +0000 UTC m=+2053.717159584" watchObservedRunningTime="2026-01-30 10:45:49.154418686 +0000 UTC m=+2053.720722520" Jan 30 10:45:49 crc kubenswrapper[4984]: I0130 10:45:49.634769 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=< Jan 30 10:45:49 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:45:49 crc kubenswrapper[4984]: > Jan 30 
10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.347360 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.347780 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.412512 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.952936 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:50 crc kubenswrapper[4984]: I0130 10:45:50.953383 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:51 crc kubenswrapper[4984]: I0130 10:45:51.037784 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:51 crc kubenswrapper[4984]: I0130 10:45:51.233418 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:51 crc kubenswrapper[4984]: I0130 10:45:51.276970 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:52 crc kubenswrapper[4984]: I0130 10:45:52.296418 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:53 crc kubenswrapper[4984]: I0130 10:45:53.698732 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:53 crc kubenswrapper[4984]: I0130 10:45:53.699392 4984 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-7b29f" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server" containerID="cri-o://ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7" gracePeriod=2 Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.184629 4984 generic.go:334] "Generic (PLEG): container finished" podID="86db9413-efcb-4f87-8605-317f50fb468d" containerID="ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7" exitCode=0 Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.184976 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7"} Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.185039 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7b29f" event={"ID":"86db9413-efcb-4f87-8605-317f50fb468d","Type":"ContainerDied","Data":"a6b266db4cc117a7bc14e19332bd11fa3d2527d71ca0df6e62ce92ee33821566"} Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.185056 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b266db4cc117a7bc14e19332bd11fa3d2527d71ca0df6e62ce92ee33821566" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.185038 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vxbxn" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server" containerID="cri-o://d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" gracePeriod=2 Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.367424 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.431309 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") pod \"86db9413-efcb-4f87-8605-317f50fb468d\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.431351 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") pod \"86db9413-efcb-4f87-8605-317f50fb468d\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.431464 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") pod \"86db9413-efcb-4f87-8605-317f50fb468d\" (UID: \"86db9413-efcb-4f87-8605-317f50fb468d\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.432680 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities" (OuterVolumeSpecName: "utilities") pod "86db9413-efcb-4f87-8605-317f50fb468d" (UID: "86db9413-efcb-4f87-8605-317f50fb468d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.439285 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx" (OuterVolumeSpecName: "kube-api-access-c7mmx") pod "86db9413-efcb-4f87-8605-317f50fb468d" (UID: "86db9413-efcb-4f87-8605-317f50fb468d"). InnerVolumeSpecName "kube-api-access-c7mmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.513958 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86db9413-efcb-4f87-8605-317f50fb468d" (UID: "86db9413-efcb-4f87-8605-317f50fb468d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.533572 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.533605 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86db9413-efcb-4f87-8605-317f50fb468d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.533617 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7mmx\" (UniqueName: \"kubernetes.io/projected/86db9413-efcb-4f87-8605-317f50fb468d-kube-api-access-c7mmx\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.585842 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.736683 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") pod \"144fba12-676d-457b-83f6-6195f089a240\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.736788 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") pod \"144fba12-676d-457b-83f6-6195f089a240\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.737007 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") pod \"144fba12-676d-457b-83f6-6195f089a240\" (UID: \"144fba12-676d-457b-83f6-6195f089a240\") " Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.737777 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities" (OuterVolumeSpecName: "utilities") pod "144fba12-676d-457b-83f6-6195f089a240" (UID: "144fba12-676d-457b-83f6-6195f089a240"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.737889 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.740584 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h" (OuterVolumeSpecName: "kube-api-access-8sn6h") pod "144fba12-676d-457b-83f6-6195f089a240" (UID: "144fba12-676d-457b-83f6-6195f089a240"). InnerVolumeSpecName "kube-api-access-8sn6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.814727 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "144fba12-676d-457b-83f6-6195f089a240" (UID: "144fba12-676d-457b-83f6-6195f089a240"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.839237 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sn6h\" (UniqueName: \"kubernetes.io/projected/144fba12-676d-457b-83f6-6195f089a240-kube-api-access-8sn6h\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:54 crc kubenswrapper[4984]: I0130 10:45:54.839288 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144fba12-676d-457b-83f6-6195f089a240-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.193852 4984 generic.go:334] "Generic (PLEG): container finished" podID="144fba12-676d-457b-83f6-6195f089a240" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" exitCode=0 Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.193934 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7b29f" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194430 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxbxn" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194430 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428"} Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194482 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxbxn" event={"ID":"144fba12-676d-457b-83f6-6195f089a240","Type":"ContainerDied","Data":"e0790b09b76fa8c67704de17272248590d8716f4a19755428a1d87c46c0ec3f5"} Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.194501 4984 scope.go:117] "RemoveContainer" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.230682 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.231095 4984 scope.go:117] "RemoveContainer" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.254316 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vxbxn"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.256503 4984 scope.go:117] "RemoveContainer" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.263915 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.273884 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7b29f"] Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 
10:45:55.304745 4984 scope.go:117] "RemoveContainer" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" Jan 30 10:45:55 crc kubenswrapper[4984]: E0130 10:45:55.305279 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428\": container with ID starting with d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428 not found: ID does not exist" containerID="d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305317 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428"} err="failed to get container status \"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428\": rpc error: code = NotFound desc = could not find container \"d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428\": container with ID starting with d11a8c9ad2e6c6512b224277922758a37a7e78e7b319b70b069ce891593da428 not found: ID does not exist" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305337 4984 scope.go:117] "RemoveContainer" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" Jan 30 10:45:55 crc kubenswrapper[4984]: E0130 10:45:55.305926 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a\": container with ID starting with f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a not found: ID does not exist" containerID="f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305947 4984 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a"} err="failed to get container status \"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a\": rpc error: code = NotFound desc = could not find container \"f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a\": container with ID starting with f8ca4fee1d3b3353d1c8307ee2ee92d1e77a726e0d5c4b9430f551f10bbef30a not found: ID does not exist" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.305959 4984 scope.go:117] "RemoveContainer" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" Jan 30 10:45:55 crc kubenswrapper[4984]: E0130 10:45:55.306164 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5\": container with ID starting with aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5 not found: ID does not exist" containerID="aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5" Jan 30 10:45:55 crc kubenswrapper[4984]: I0130 10:45:55.306191 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5"} err="failed to get container status \"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5\": rpc error: code = NotFound desc = could not find container \"aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5\": container with ID starting with aa9d33963af6cdd06640b7d32445ecce8474b6ea6d29d1a01ee30f17b09df2b5 not found: ID does not exist" Jan 30 10:45:56 crc kubenswrapper[4984]: I0130 10:45:56.102498 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144fba12-676d-457b-83f6-6195f089a240" path="/var/lib/kubelet/pods/144fba12-676d-457b-83f6-6195f089a240/volumes" Jan 30 10:45:56 crc kubenswrapper[4984]: I0130 
10:45:56.103881 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86db9413-efcb-4f87-8605-317f50fb468d" path="/var/lib/kubelet/pods/86db9413-efcb-4f87-8605-317f50fb468d/volumes"
Jan 30 10:45:59 crc kubenswrapper[4984]: I0130 10:45:59.606480 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=<
Jan 30 10:45:59 crc kubenswrapper[4984]: 	timeout: failed to connect service ":50051" within 1s
Jan 30 10:45:59 crc kubenswrapper[4984]:  >
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.001353 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.002154 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.002281 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh"
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.003942 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.004066 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e" gracePeriod=600
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.265284 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e" exitCode=0
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.265362 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e"}
Jan 30 10:46:03 crc kubenswrapper[4984]: I0130 10:46:03.265685 4984 scope.go:117] "RemoveContainer" containerID="f53d78227b5d7ef23278408152a4340b083bce9832582a11b0a8443571ddb94a"
Jan 30 10:46:04 crc kubenswrapper[4984]: I0130 10:46:04.276812 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988"}
Jan 30 10:46:09 crc kubenswrapper[4984]: I0130 10:46:09.591782 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" probeResult="failure" output=<
Jan 30 10:46:09 crc kubenswrapper[4984]: 	timeout: failed to connect service ":50051" within 1s
Jan 30 10:46:09 crc kubenswrapper[4984]:  >
Jan 30 10:46:18 crc kubenswrapper[4984]: I0130 10:46:18.624382 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:18 crc kubenswrapper[4984]: I0130 10:46:18.673806 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:18 crc kubenswrapper[4984]: I0130 10:46:18.860531 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:46:20 crc kubenswrapper[4984]: I0130 10:46:20.433985 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvmbl" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server" containerID="cri-o://0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9" gracePeriod=2
Jan 30 10:46:20 crc kubenswrapper[4984]: I0130 10:46:20.986823 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.048205 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") pod \"85f0471c-9b7e-4545-8550-08db9fa38fed\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") "
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.048311 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") pod \"85f0471c-9b7e-4545-8550-08db9fa38fed\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") "
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.048335 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") pod \"85f0471c-9b7e-4545-8550-08db9fa38fed\" (UID: \"85f0471c-9b7e-4545-8550-08db9fa38fed\") "
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.049368 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities" (OuterVolumeSpecName: "utilities") pod "85f0471c-9b7e-4545-8550-08db9fa38fed" (UID: "85f0471c-9b7e-4545-8550-08db9fa38fed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.059364 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56" (OuterVolumeSpecName: "kube-api-access-sbh56") pod "85f0471c-9b7e-4545-8550-08db9fa38fed" (UID: "85f0471c-9b7e-4545-8550-08db9fa38fed"). InnerVolumeSpecName "kube-api-access-sbh56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.150579 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbh56\" (UniqueName: \"kubernetes.io/projected/85f0471c-9b7e-4545-8550-08db9fa38fed-kube-api-access-sbh56\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.150613 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.165653 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85f0471c-9b7e-4545-8550-08db9fa38fed" (UID: "85f0471c-9b7e-4545-8550-08db9fa38fed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.251611 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f0471c-9b7e-4545-8550-08db9fa38fed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446414 4984 generic.go:334] "Generic (PLEG): container finished" podID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9" exitCode=0
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446796 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvmbl"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446848 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"}
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446891 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvmbl" event={"ID":"85f0471c-9b7e-4545-8550-08db9fa38fed","Type":"ContainerDied","Data":"24a1148d15bee715c556a07badff1472f1bb6f79211e4948aa32e4f198a42f23"}
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.446919 4984 scope.go:117] "RemoveContainer" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.480944 4984 scope.go:117] "RemoveContainer" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.496206 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.502958 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvmbl"]
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.511507 4984 scope.go:117] "RemoveContainer" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.561863 4984 scope.go:117] "RemoveContainer" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"
Jan 30 10:46:21 crc kubenswrapper[4984]: E0130 10:46:21.562464 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9\": container with ID starting with 0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9 not found: ID does not exist" containerID="0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.562504 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9"} err="failed to get container status \"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9\": rpc error: code = NotFound desc = could not find container \"0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9\": container with ID starting with 0d282bfb4d7a77a0a795ef34751f7b1c6ed7e4b3c38f3c9db1b68fbca8fc7de9 not found: ID does not exist"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.562557 4984 scope.go:117] "RemoveContainer" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"
Jan 30 10:46:21 crc kubenswrapper[4984]: E0130 10:46:21.563293 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa\": container with ID starting with 42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa not found: ID does not exist" containerID="42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.563353 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa"} err="failed to get container status \"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa\": rpc error: code = NotFound desc = could not find container \"42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa\": container with ID starting with 42522cac2c4c280b86bc5eb1383467990168c17d39ed9cae4e620319e6c7fcaa not found: ID does not exist"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.563392 4984 scope.go:117] "RemoveContainer" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"
Jan 30 10:46:21 crc kubenswrapper[4984]: E0130 10:46:21.563830 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f\": container with ID starting with fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f not found: ID does not exist" containerID="fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"
Jan 30 10:46:21 crc kubenswrapper[4984]: I0130 10:46:21.563864 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f"} err="failed to get container status \"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f\": rpc error: code = NotFound desc = could not find container \"fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f\": container with ID starting with fe83fde9f13077f986acf00a60e705f3cad5c3d22c6d1003e239deb21289b40f not found: ID does not exist"
Jan 30 10:46:22 crc kubenswrapper[4984]: I0130 10:46:22.102659 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" path="/var/lib/kubelet/pods/85f0471c-9b7e-4545-8550-08db9fa38fed/volumes"
Jan 30 10:46:47 crc kubenswrapper[4984]: I0130 10:46:47.735337 4984 generic.go:334] "Generic (PLEG): container finished" podID="2f986324-c570-4c65-aed1-952aa2538af8" containerID="3410426cef37882e6e66b81f19bf117fcd08a6957b5586575d68c7a3a2e02ae8" exitCode=0
Jan 30 10:46:47 crc kubenswrapper[4984]: I0130 10:46:47.735442 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerDied","Data":"3410426cef37882e6e66b81f19bf117fcd08a6957b5586575d68c7a3a2e02ae8"}
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.224372 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324756 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324871 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324893 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324951 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.324975 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") pod \"2f986324-c570-4c65-aed1-952aa2538af8\" (UID: \"2f986324-c570-4c65-aed1-952aa2538af8\") "
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.330805 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.330886 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69" (OuterVolumeSpecName: "kube-api-access-rkm69") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "kube-api-access-rkm69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.348773 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.351040 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory" (OuterVolumeSpecName: "inventory") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.355824 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f986324-c570-4c65-aed1-952aa2538af8" (UID: "2f986324-c570-4c65-aed1-952aa2538af8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426586 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkm69\" (UniqueName: \"kubernetes.io/projected/2f986324-c570-4c65-aed1-952aa2538af8-kube-api-access-rkm69\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426618 4984 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2f986324-c570-4c65-aed1-952aa2538af8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426629 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426639 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.426648 4984 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f986324-c570-4c65-aed1-952aa2538af8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.760633 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6" event={"ID":"2f986324-c570-4c65-aed1-952aa2538af8","Type":"ContainerDied","Data":"5441f0d6bae31f22da1ba983066bd904726100824475ad7b233ba6ccd9255c43"}
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.760682 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5441f0d6bae31f22da1ba983066bd904726100824475ad7b233ba6ccd9255c43"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.760732 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-57hv6"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.858500 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"]
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859393 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859424 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859444 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859454 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859469 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f986324-c570-4c65-aed1-952aa2538af8" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859477 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f986324-c570-4c65-aed1-952aa2538af8" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859490 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859498 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859512 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859520 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859537 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859544 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859569 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859578 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859595 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859603 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-utilities"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859622 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859630 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: E0130 10:46:49.859644 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859651 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="extract-content"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859898 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="144fba12-676d-457b-83f6-6195f089a240" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859929 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f986324-c570-4c65-aed1-952aa2538af8" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859949 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="86db9413-efcb-4f87-8605-317f50fb468d" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.859969 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f0471c-9b7e-4545-8550-08db9fa38fed" containerName="registry-server"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.860941 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.863769 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.863927 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.865130 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.865396 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.865778 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.870416 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"]
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.872357 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.934933 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935018 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935043 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935145 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935175 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:49 crc kubenswrapper[4984]: I0130 10:46:49.935229 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037021 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037086 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037159 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037216 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037285 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.037316 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.042284 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.043047 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.043260 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.044192 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.046183 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.061891 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.184783 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.706731 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"]
Jan 30 10:46:50 crc kubenswrapper[4984]: I0130 10:46:50.768045 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerStarted","Data":"26f2602681c8c492d0051c7255d16410438290d8d1dcf9880aa2a552444af96b"}
Jan 30 10:46:51 crc kubenswrapper[4984]: I0130 10:46:51.781028 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerStarted","Data":"a2962b323fced6ad8eb01c4f67039d08724bb7e095562875d9745776cc23a5d0"}
Jan 30 10:46:51 crc kubenswrapper[4984]: I0130 10:46:51.805202 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" podStartSLOduration=2.332331899 podStartE2EDuration="2.805181762s" podCreationTimestamp="2026-01-30 10:46:49 +0000 UTC" firstStartedPulling="2026-01-30 10:46:50.718623331 +0000 UTC m=+2115.284927155" lastFinishedPulling="2026-01-30 10:46:51.191473204 +0000 UTC m=+2115.757777018" observedRunningTime="2026-01-30 10:46:51.796759515 +0000 UTC m=+2116.363063339" watchObservedRunningTime="2026-01-30 10:46:51.805181762 +0000 UTC m=+2116.371485606"
Jan 30 10:47:37 crc kubenswrapper[4984]: I0130 10:47:37.523094 4984 generic.go:334] "Generic (PLEG): container finished" podID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerID="a2962b323fced6ad8eb01c4f67039d08724bb7e095562875d9745776cc23a5d0" exitCode=0
Jan 30 10:47:37 crc kubenswrapper[4984]: I0130 10:47:37.523188 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerDied","Data":"a2962b323fced6ad8eb01c4f67039d08724bb7e095562875d9745776cc23a5d0"}
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.021944 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp"
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166730 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") "
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166808 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") "
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166926 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") "
Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.175668 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.166946 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.181532 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.181712 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") pod \"4549607f-18ca-42e1-8c2b-b7d9793e2005\" (UID: \"4549607f-18ca-42e1-8c2b-b7d9793e2005\") " Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.182966 4984 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.187692 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7" (OuterVolumeSpecName: "kube-api-access-wxdg7") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "kube-api-access-wxdg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.206842 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.210581 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory" (OuterVolumeSpecName: "inventory") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.212414 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.228425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4549607f-18ca-42e1-8c2b-b7d9793e2005" (UID: "4549607f-18ca-42e1-8c2b-b7d9793e2005"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.284748 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.284970 4984 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.285079 4984 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.285144 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4549607f-18ca-42e1-8c2b-b7d9793e2005-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.285198 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdg7\" (UniqueName: \"kubernetes.io/projected/4549607f-18ca-42e1-8c2b-b7d9793e2005-kube-api-access-wxdg7\") on node 
\"crc\" DevicePath \"\"" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.554990 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" event={"ID":"4549607f-18ca-42e1-8c2b-b7d9793e2005","Type":"ContainerDied","Data":"26f2602681c8c492d0051c7255d16410438290d8d1dcf9880aa2a552444af96b"} Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.555049 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f2602681c8c492d0051c7255d16410438290d8d1dcf9880aa2a552444af96b" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.558421 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.733958 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm"] Jan 30 10:47:39 crc kubenswrapper[4984]: E0130 10:47:39.734345 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.734362 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.734520 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="4549607f-18ca-42e1-8c2b-b7d9793e2005" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.735097 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.737743 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.738019 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.738796 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.740538 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.752788 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.752806 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm"] Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804554 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804669 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804814 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.804849 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.805033 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.906983 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907039 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907087 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907117 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.907157 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.912990 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: 
\"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.913635 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.913728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.913808 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:39 crc kubenswrapper[4984]: I0130 10:47:39.931345 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:40 crc kubenswrapper[4984]: I0130 10:47:40.124699 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:47:40 crc kubenswrapper[4984]: I0130 10:47:40.675317 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm"] Jan 30 10:47:41 crc kubenswrapper[4984]: I0130 10:47:41.576583 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerStarted","Data":"e904d384dbc793ea32f3f7021ee588e47446adb38a277209a9e7e2205814ae72"} Jan 30 10:47:41 crc kubenswrapper[4984]: I0130 10:47:41.576921 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerStarted","Data":"19bcb4c8d3b2a671dd50be23d3162eaa64c2878736d76cdf8b28701a759f6bf0"} Jan 30 10:47:41 crc kubenswrapper[4984]: I0130 10:47:41.604337 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" podStartSLOduration=2.053288674 podStartE2EDuration="2.60431489s" podCreationTimestamp="2026-01-30 10:47:39 +0000 UTC" firstStartedPulling="2026-01-30 10:47:40.681543017 +0000 UTC m=+2165.247846841" lastFinishedPulling="2026-01-30 10:47:41.232569223 +0000 UTC m=+2165.798873057" observedRunningTime="2026-01-30 10:47:41.599594152 +0000 UTC m=+2166.165898006" watchObservedRunningTime="2026-01-30 10:47:41.60431489 +0000 UTC m=+2166.170618724" Jan 30 10:48:03 crc kubenswrapper[4984]: I0130 10:48:03.001200 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:48:03 crc kubenswrapper[4984]: I0130 
10:48:03.001891 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.584629 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.588790 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.598422 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.703932 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.704002 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.704165 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod 
\"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.805265 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.805335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.805388 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.806108 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.806233 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"redhat-marketplace-fbxpr\" (UID: 
\"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.833426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod \"redhat-marketplace-fbxpr\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:15 crc kubenswrapper[4984]: I0130 10:48:15.911096 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:16 crc kubenswrapper[4984]: I0130 10:48:16.366278 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:16 crc kubenswrapper[4984]: W0130 10:48:16.372484 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7365f2b3_2916_4c3b_8ce8_d34b7b45bcbb.slice/crio-463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc WatchSource:0}: Error finding container 463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc: Status 404 returned error can't find the container with id 463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc Jan 30 10:48:16 crc kubenswrapper[4984]: I0130 10:48:16.968856 4984 generic.go:334] "Generic (PLEG): container finished" podID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" exitCode=0 Jan 30 10:48:16 crc kubenswrapper[4984]: I0130 10:48:16.968928 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820"} Jan 30 10:48:16 crc 
kubenswrapper[4984]: I0130 10:48:16.969224 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerStarted","Data":"463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc"} Jan 30 10:48:17 crc kubenswrapper[4984]: I0130 10:48:17.983814 4984 generic.go:334] "Generic (PLEG): container finished" podID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" exitCode=0 Jan 30 10:48:17 crc kubenswrapper[4984]: I0130 10:48:17.983893 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc"} Jan 30 10:48:18 crc kubenswrapper[4984]: I0130 10:48:18.996376 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerStarted","Data":"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56"} Jan 30 10:48:19 crc kubenswrapper[4984]: I0130 10:48:19.025699 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbxpr" podStartSLOduration=2.582496624 podStartE2EDuration="4.025669924s" podCreationTimestamp="2026-01-30 10:48:15 +0000 UTC" firstStartedPulling="2026-01-30 10:48:16.972644346 +0000 UTC m=+2201.538948200" lastFinishedPulling="2026-01-30 10:48:18.415817636 +0000 UTC m=+2202.982121500" observedRunningTime="2026-01-30 10:48:19.01925161 +0000 UTC m=+2203.585555444" watchObservedRunningTime="2026-01-30 10:48:19.025669924 +0000 UTC m=+2203.591973758" Jan 30 10:48:25 crc kubenswrapper[4984]: I0130 10:48:25.911370 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:25 crc kubenswrapper[4984]: I0130 10:48:25.913200 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:25 crc kubenswrapper[4984]: I0130 10:48:25.962523 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:26 crc kubenswrapper[4984]: I0130 10:48:26.119687 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:26 crc kubenswrapper[4984]: I0130 10:48:26.194277 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.095827 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fbxpr" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" containerID="cri-o://0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" gracePeriod=2 Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.736043 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.877941 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") pod \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.878062 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") pod \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.878315 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") pod \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\" (UID: \"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb\") " Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.883665 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities" (OuterVolumeSpecName: "utilities") pod "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" (UID: "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.885045 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2" (OuterVolumeSpecName: "kube-api-access-h8rj2") pod "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" (UID: "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb"). InnerVolumeSpecName "kube-api-access-h8rj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.906733 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" (UID: "7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.980671 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.980706 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8rj2\" (UniqueName: \"kubernetes.io/projected/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-kube-api-access-h8rj2\") on node \"crc\" DevicePath \"\"" Jan 30 10:48:28 crc kubenswrapper[4984]: I0130 10:48:28.980719 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112634 4984 generic.go:334] "Generic (PLEG): container finished" podID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" exitCode=0 Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112700 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56"} Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112727 4984 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fbxpr" event={"ID":"7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb","Type":"ContainerDied","Data":"463bffac2c91fb5ad2181cbce8e2a7bdb72302ce0ce27faa1be9fe2d0e220bbc"} Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.112745 4984 scope.go:117] "RemoveContainer" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.113050 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbxpr" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.149778 4984 scope.go:117] "RemoveContainer" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.161616 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.170623 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbxpr"] Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.178698 4984 scope.go:117] "RemoveContainer" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.235864 4984 scope.go:117] "RemoveContainer" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" Jan 30 10:48:29 crc kubenswrapper[4984]: E0130 10:48:29.236475 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56\": container with ID starting with 0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56 not found: ID does not exist" containerID="0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.236528 4984 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56"} err="failed to get container status \"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56\": rpc error: code = NotFound desc = could not find container \"0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56\": container with ID starting with 0953b1e746a46fd476d0b8f000b6837d8e037aea1258044358062fdd0617ab56 not found: ID does not exist" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.236562 4984 scope.go:117] "RemoveContainer" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" Jan 30 10:48:29 crc kubenswrapper[4984]: E0130 10:48:29.236993 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc\": container with ID starting with d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc not found: ID does not exist" containerID="d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.237039 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc"} err="failed to get container status \"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc\": rpc error: code = NotFound desc = could not find container \"d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc\": container with ID starting with d3c561d4bf6c60af87681cf4fe616eb206103672057a31d869499724d45c43dc not found: ID does not exist" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.237066 4984 scope.go:117] "RemoveContainer" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" Jan 30 10:48:29 crc kubenswrapper[4984]: E0130 
10:48:29.237433 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820\": container with ID starting with 8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820 not found: ID does not exist" containerID="8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820" Jan 30 10:48:29 crc kubenswrapper[4984]: I0130 10:48:29.237474 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820"} err="failed to get container status \"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820\": rpc error: code = NotFound desc = could not find container \"8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820\": container with ID starting with 8921a31fda45169819b0e4047b7bf3cb871db4ee4a2c59fdc84059645aca1820 not found: ID does not exist" Jan 30 10:48:30 crc kubenswrapper[4984]: I0130 10:48:30.111807 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" path="/var/lib/kubelet/pods/7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb/volumes" Jan 30 10:48:33 crc kubenswrapper[4984]: I0130 10:48:33.000321 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:48:33 crc kubenswrapper[4984]: I0130 10:48:33.000650 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.001243 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.002039 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.002107 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.003044 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:49:03 crc kubenswrapper[4984]: I0130 10:49:03.003145 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" gracePeriod=600 Jan 30 10:49:04 crc kubenswrapper[4984]: E0130 10:49:04.251536 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.483410 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" exitCode=0 Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.483499 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988"} Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.483913 4984 scope.go:117] "RemoveContainer" containerID="d7b7e611951c8db2c88b62ddd76096a8061707b8c0f9d1013f4effa4c3ee8f1e" Jan 30 10:49:04 crc kubenswrapper[4984]: I0130 10:49:04.484612 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:04 crc kubenswrapper[4984]: E0130 10:49:04.485013 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:18 crc kubenswrapper[4984]: I0130 10:49:18.090528 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:18 crc kubenswrapper[4984]: E0130 10:49:18.091529 4984 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:30 crc kubenswrapper[4984]: I0130 10:49:30.090407 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:30 crc kubenswrapper[4984]: E0130 10:49:30.092550 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:42 crc kubenswrapper[4984]: I0130 10:49:42.090641 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:42 crc kubenswrapper[4984]: E0130 10:49:42.092560 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:49:53 crc kubenswrapper[4984]: I0130 10:49:53.090430 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:49:53 crc kubenswrapper[4984]: E0130 10:49:53.091502 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:04 crc kubenswrapper[4984]: I0130 10:50:04.090805 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:04 crc kubenswrapper[4984]: E0130 10:50:04.091594 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:18 crc kubenswrapper[4984]: I0130 10:50:18.090926 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:18 crc kubenswrapper[4984]: E0130 10:50:18.091888 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:33 crc kubenswrapper[4984]: I0130 10:50:33.090570 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:33 crc kubenswrapper[4984]: E0130 
10:50:33.091608 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:44 crc kubenswrapper[4984]: I0130 10:50:44.090389 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:44 crc kubenswrapper[4984]: E0130 10:50:44.091329 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:50:58 crc kubenswrapper[4984]: I0130 10:50:58.091392 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:50:58 crc kubenswrapper[4984]: E0130 10:50:58.092900 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:09 crc kubenswrapper[4984]: I0130 10:51:09.090861 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:09 crc 
kubenswrapper[4984]: E0130 10:51:09.092403 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:24 crc kubenswrapper[4984]: I0130 10:51:24.090446 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:24 crc kubenswrapper[4984]: E0130 10:51:24.093552 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:29 crc kubenswrapper[4984]: I0130 10:51:29.960391 4984 generic.go:334] "Generic (PLEG): container finished" podID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerID="e904d384dbc793ea32f3f7021ee588e47446adb38a277209a9e7e2205814ae72" exitCode=0 Jan 30 10:51:29 crc kubenswrapper[4984]: I0130 10:51:29.960494 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerDied","Data":"e904d384dbc793ea32f3f7021ee588e47446adb38a277209a9e7e2205814ae72"} Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.432690 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530278 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530352 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530502 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530538 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.530631 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.979679 4984 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" event={"ID":"d3ca7cba-514d-4761-821d-9b48578f0cc3","Type":"ContainerDied","Data":"19bcb4c8d3b2a671dd50be23d3162eaa64c2878736d76cdf8b28701a759f6bf0"} Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.980138 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19bcb4c8d3b2a671dd50be23d3162eaa64c2878736d76cdf8b28701a759f6bf0" Jan 30 10:51:31 crc kubenswrapper[4984]: I0130 10:51:31.980292 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.117631 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm"] Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118183 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118204 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118212 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118219 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118232 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-content" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118238 4984 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-content" Jan 30 10:51:32 crc kubenswrapper[4984]: E0130 10:51:32.118269 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-utilities" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118278 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="extract-utilities" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118502 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="7365f2b3-2916-4c3b-8ce8-d34b7b45bcbb" containerName="registry-server" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.118531 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ca7cba-514d-4761-821d-9b48578f0cc3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.119295 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm"] Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.119383 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.122435 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.122640 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.122951 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.248775 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.249900 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm" (OuterVolumeSpecName: "kube-api-access-jcvvm") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "kube-api-access-jcvvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254170 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254380 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") pod \"d3ca7cba-514d-4761-821d-9b48578f0cc3\" (UID: \"d3ca7cba-514d-4761-821d-9b48578f0cc3\") " Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254726 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254791 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254831 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254875 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tj4q\" (UniqueName: 
\"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254946 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.254974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255010 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255063 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" 
(UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255143 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255212 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcvvm\" (UniqueName: \"kubernetes.io/projected/d3ca7cba-514d-4761-821d-9b48578f0cc3-kube-api-access-jcvvm\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255227 4984 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: W0130 10:51:32.255337 4984 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d3ca7cba-514d-4761-821d-9b48578f0cc3/volumes/kubernetes.io~secret/ssh-key-openstack-edpm-ipam Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.255347 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.258425 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory" (OuterVolumeSpecName: "inventory") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.258949 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d3ca7cba-514d-4761-821d-9b48578f0cc3" (UID: "d3ca7cba-514d-4761-821d-9b48578f0cc3"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.356916 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.356976 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357033 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357053 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357081 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357121 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357177 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357199 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357232 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357314 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357327 4984 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.357335 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3ca7cba-514d-4761-821d-9b48578f0cc3-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.358888 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.361281 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.361544 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.362212 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.362373 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.362784 4984 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.363444 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.364939 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.389057 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lrcvm\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:32 crc kubenswrapper[4984]: I0130 10:51:32.435669 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:51:33 crc kubenswrapper[4984]: I0130 10:51:33.007425 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm"] Jan 30 10:51:33 crc kubenswrapper[4984]: I0130 10:51:33.017706 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:51:34 crc kubenswrapper[4984]: I0130 10:51:34.001844 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerStarted","Data":"1e7b15a8490d181c1436d2dc6adcf4659c7b4edd12e93391658f7ee8d52a9a57"} Jan 30 10:51:34 crc kubenswrapper[4984]: I0130 10:51:34.002478 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerStarted","Data":"8832c5c57003890662a0f9615f11ddf277cbd88679f15ae2546fd3edafc4bdd9"} Jan 30 10:51:34 crc kubenswrapper[4984]: I0130 10:51:34.049553 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" podStartSLOduration=1.6454343649999998 podStartE2EDuration="2.049526306s" podCreationTimestamp="2026-01-30 10:51:32 +0000 UTC" firstStartedPulling="2026-01-30 10:51:33.017384394 +0000 UTC m=+2397.583688238" lastFinishedPulling="2026-01-30 10:51:33.421476355 +0000 UTC m=+2397.987780179" observedRunningTime="2026-01-30 10:51:34.030938313 +0000 UTC m=+2398.597242137" watchObservedRunningTime="2026-01-30 10:51:34.049526306 +0000 UTC m=+2398.615830160" Jan 30 10:51:38 crc kubenswrapper[4984]: I0130 10:51:38.091645 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:38 crc kubenswrapper[4984]: E0130 10:51:38.092557 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:51:49 crc kubenswrapper[4984]: I0130 10:51:49.091708 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:51:49 crc kubenswrapper[4984]: E0130 10:51:49.092885 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:03 crc kubenswrapper[4984]: I0130 10:52:03.090022 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:03 crc kubenswrapper[4984]: E0130 10:52:03.091168 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:17 crc kubenswrapper[4984]: I0130 10:52:17.091320 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:17 crc kubenswrapper[4984]: E0130 
10:52:17.092235 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:23 crc kubenswrapper[4984]: I0130 10:52:23.810145 4984 scope.go:117] "RemoveContainer" containerID="de219b46efc681590dfc9f6c663921083e34944cc19d08e21c367c0cf53ca7e4" Jan 30 10:52:23 crc kubenswrapper[4984]: I0130 10:52:23.839918 4984 scope.go:117] "RemoveContainer" containerID="54816c0cf5b8eb3e710c434a7440b8072cfb0783b73c9b74be19869c4c444e35" Jan 30 10:52:23 crc kubenswrapper[4984]: I0130 10:52:23.882207 4984 scope.go:117] "RemoveContainer" containerID="ed431fc3a6db3fa0fe232867a59cbed137413d11b1a74f7c6cfa6f98d30e46d7" Jan 30 10:52:31 crc kubenswrapper[4984]: I0130 10:52:31.090397 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:31 crc kubenswrapper[4984]: E0130 10:52:31.091061 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:46 crc kubenswrapper[4984]: I0130 10:52:46.096429 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:46 crc kubenswrapper[4984]: E0130 10:52:46.097418 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:52:57 crc kubenswrapper[4984]: I0130 10:52:57.090579 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:52:57 crc kubenswrapper[4984]: E0130 10:52:57.091824 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:09 crc kubenswrapper[4984]: I0130 10:53:09.090214 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:09 crc kubenswrapper[4984]: E0130 10:53:09.091046 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:20 crc kubenswrapper[4984]: I0130 10:53:20.090315 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:20 crc kubenswrapper[4984]: E0130 10:53:20.091296 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:33 crc kubenswrapper[4984]: I0130 10:53:33.090699 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:33 crc kubenswrapper[4984]: E0130 10:53:33.091889 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:53:42 crc kubenswrapper[4984]: I0130 10:53:42.236407 4984 generic.go:334] "Generic (PLEG): container finished" podID="eaa18315-192f-412f-b94c-708c98209a5a" containerID="1e7b15a8490d181c1436d2dc6adcf4659c7b4edd12e93391658f7ee8d52a9a57" exitCode=0 Jan 30 10:53:42 crc kubenswrapper[4984]: I0130 10:53:42.236513 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerDied","Data":"1e7b15a8490d181c1436d2dc6adcf4659c7b4edd12e93391658f7ee8d52a9a57"} Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.748744 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883193 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883260 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883375 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883421 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883471 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883507 4984 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883536 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883581 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.883640 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") pod \"eaa18315-192f-412f-b94c-708c98209a5a\" (UID: \"eaa18315-192f-412f-b94c-708c98209a5a\") " Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.889885 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.892141 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q" (OuterVolumeSpecName: "kube-api-access-5tj4q") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "kube-api-access-5tj4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.916128 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.917452 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.919436 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.919561 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.919964 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.922886 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.923477 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory" (OuterVolumeSpecName: "inventory") pod "eaa18315-192f-412f-b94c-708c98209a5a" (UID: "eaa18315-192f-412f-b94c-708c98209a5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986183 4984 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986220 4984 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/eaa18315-192f-412f-b94c-708c98209a5a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986286 4984 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986299 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986309 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tj4q\" (UniqueName: \"kubernetes.io/projected/eaa18315-192f-412f-b94c-708c98209a5a-kube-api-access-5tj4q\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986317 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986324 4984 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-migration-ssh-key-0\") on node 
\"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986332 4984 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:43 crc kubenswrapper[4984]: I0130 10:53:43.986360 4984 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa18315-192f-412f-b94c-708c98209a5a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.261082 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" event={"ID":"eaa18315-192f-412f-b94c-708c98209a5a","Type":"ContainerDied","Data":"8832c5c57003890662a0f9615f11ddf277cbd88679f15ae2546fd3edafc4bdd9"} Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.261116 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lrcvm" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.261121 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8832c5c57003890662a0f9615f11ddf277cbd88679f15ae2546fd3edafc4bdd9" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.358053 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf"] Jan 30 10:53:44 crc kubenswrapper[4984]: E0130 10:53:44.358515 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa18315-192f-412f-b94c-708c98209a5a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.358537 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa18315-192f-412f-b94c-708c98209a5a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.358767 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa18315-192f-412f-b94c-708c98209a5a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.359508 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367629 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367723 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367851 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.367963 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.369465 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t9l7t" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.378350 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf"] Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.504936 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505301 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505348 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505373 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505400 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505421 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: 
\"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.505441 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.606911 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.606972 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607208 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607266 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607304 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.607335 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.612728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.612848 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.613532 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.614992 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.615766 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: 
\"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.617224 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.624878 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-npmxf\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:44 crc kubenswrapper[4984]: I0130 10:53:44.681059 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:53:45 crc kubenswrapper[4984]: I0130 10:53:45.261646 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf"] Jan 30 10:53:46 crc kubenswrapper[4984]: I0130 10:53:46.281404 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerStarted","Data":"1ce026c7d9e7865804b002636285604d59f66b4508d2e8577f360f5cb9aa549b"} Jan 30 10:53:46 crc kubenswrapper[4984]: I0130 10:53:46.281667 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerStarted","Data":"49e3d33584fba571e1d643382a9e95c4634c1598a3ef19de78845ca7b6eae51f"} Jan 30 10:53:46 crc kubenswrapper[4984]: I0130 10:53:46.299178 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" podStartSLOduration=1.882130853 podStartE2EDuration="2.299160788s" podCreationTimestamp="2026-01-30 10:53:44 +0000 UTC" firstStartedPulling="2026-01-30 10:53:45.265853562 +0000 UTC m=+2529.832157416" lastFinishedPulling="2026-01-30 10:53:45.682883507 +0000 UTC m=+2530.249187351" observedRunningTime="2026-01-30 10:53:46.296973689 +0000 UTC m=+2530.863277523" watchObservedRunningTime="2026-01-30 10:53:46.299160788 +0000 UTC m=+2530.865464612" Jan 30 10:53:48 crc kubenswrapper[4984]: I0130 10:53:48.092218 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:53:48 crc kubenswrapper[4984]: E0130 10:53:48.093400 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:54:02 crc kubenswrapper[4984]: I0130 10:54:02.090308 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:54:02 crc kubenswrapper[4984]: E0130 10:54:02.091226 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 10:54:13 crc kubenswrapper[4984]: I0130 10:54:13.091001 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:54:13 crc kubenswrapper[4984]: I0130 10:54:13.521661 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206"} Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.303519 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.307759 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.327725 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.404158 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.404426 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.404507 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.506814 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.506910 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.507030 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.507416 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.507638 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.528388 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"redhat-operators-vpdjr\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:30 crc kubenswrapper[4984]: I0130 10:55:30.675514 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:31 crc kubenswrapper[4984]: I0130 10:55:31.131150 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:31 crc kubenswrapper[4984]: I0130 10:55:31.207643 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerStarted","Data":"c2fca6339fb0afef8e04fdf5437a00151a74afbf727cdbcb851c0730f8e653c5"} Jan 30 10:55:32 crc kubenswrapper[4984]: I0130 10:55:32.221301 4984 generic.go:334] "Generic (PLEG): container finished" podID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerID="9b33cd5d0cb8b2dfee6f54825b893734780a83e181b35cae89986d070ac3193d" exitCode=0 Jan 30 10:55:32 crc kubenswrapper[4984]: I0130 10:55:32.221448 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"9b33cd5d0cb8b2dfee6f54825b893734780a83e181b35cae89986d070ac3193d"} Jan 30 10:55:35 crc kubenswrapper[4984]: I0130 10:55:35.253644 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerStarted","Data":"587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add"} Jan 30 10:55:36 crc kubenswrapper[4984]: I0130 10:55:36.268408 4984 generic.go:334] "Generic (PLEG): container finished" podID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerID="587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add" exitCode=0 Jan 30 10:55:36 crc kubenswrapper[4984]: I0130 10:55:36.268479 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" 
event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add"} Jan 30 10:55:38 crc kubenswrapper[4984]: I0130 10:55:38.291899 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerStarted","Data":"efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b"} Jan 30 10:55:38 crc kubenswrapper[4984]: I0130 10:55:38.341303 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpdjr" podStartSLOduration=3.394633307 podStartE2EDuration="8.341218458s" podCreationTimestamp="2026-01-30 10:55:30 +0000 UTC" firstStartedPulling="2026-01-30 10:55:32.223163923 +0000 UTC m=+2636.789467757" lastFinishedPulling="2026-01-30 10:55:37.169749044 +0000 UTC m=+2641.736052908" observedRunningTime="2026-01-30 10:55:38.335347939 +0000 UTC m=+2642.901651813" watchObservedRunningTime="2026-01-30 10:55:38.341218458 +0000 UTC m=+2642.907522312" Jan 30 10:55:40 crc kubenswrapper[4984]: I0130 10:55:40.676393 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:40 crc kubenswrapper[4984]: I0130 10:55:40.676991 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:41 crc kubenswrapper[4984]: I0130 10:55:41.725592 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vpdjr" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" probeResult="failure" output=< Jan 30 10:55:41 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 10:55:41 crc kubenswrapper[4984]: > Jan 30 10:55:50 crc kubenswrapper[4984]: I0130 10:55:50.734626 4984 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:50 crc kubenswrapper[4984]: I0130 10:55:50.802555 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:50 crc kubenswrapper[4984]: I0130 10:55:50.984785 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:52 crc kubenswrapper[4984]: I0130 10:55:52.422171 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpdjr" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" containerID="cri-o://efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b" gracePeriod=2 Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.445110 4984 generic.go:334] "Generic (PLEG): container finished" podID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerID="efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b" exitCode=0 Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.445182 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b"} Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.809470 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.904080 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") pod \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.904357 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") pod \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.904424 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") pod \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\" (UID: \"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a\") " Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.905484 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities" (OuterVolumeSpecName: "utilities") pod "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" (UID: "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.906134 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:55:53 crc kubenswrapper[4984]: I0130 10:55:53.910287 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f" (OuterVolumeSpecName: "kube-api-access-lrb4f") pod "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" (UID: "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a"). InnerVolumeSpecName "kube-api-access-lrb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.008022 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrb4f\" (UniqueName: \"kubernetes.io/projected/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-kube-api-access-lrb4f\") on node \"crc\" DevicePath \"\"" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.035012 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" (UID: "61f4cad4-d9cd-4a1a-84c3-393a330c0b0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.110402 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.460478 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpdjr" event={"ID":"61f4cad4-d9cd-4a1a-84c3-393a330c0b0a","Type":"ContainerDied","Data":"c2fca6339fb0afef8e04fdf5437a00151a74afbf727cdbcb851c0730f8e653c5"} Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.460542 4984 scope.go:117] "RemoveContainer" containerID="efeb4faa651c470060b361fe804eecca01468f60fcbae006e5389cdfd3185e0b" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.460593 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpdjr" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.490428 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.499300 4984 scope.go:117] "RemoveContainer" containerID="587c0698cb0e88f277da1829fdf6755b9ad4182e6fb705b36e85e1de56d38add" Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.500458 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpdjr"] Jan 30 10:55:54 crc kubenswrapper[4984]: I0130 10:55:54.522610 4984 scope.go:117] "RemoveContainer" containerID="9b33cd5d0cb8b2dfee6f54825b893734780a83e181b35cae89986d070ac3193d" Jan 30 10:55:56 crc kubenswrapper[4984]: I0130 10:55:56.104786 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" path="/var/lib/kubelet/pods/61f4cad4-d9cd-4a1a-84c3-393a330c0b0a/volumes" Jan 30 10:56:03 crc 
kubenswrapper[4984]: I0130 10:56:03.541709 4984 generic.go:334] "Generic (PLEG): container finished" podID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" containerID="1ce026c7d9e7865804b002636285604d59f66b4508d2e8577f360f5cb9aa549b" exitCode=0 Jan 30 10:56:03 crc kubenswrapper[4984]: I0130 10:56:03.541770 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerDied","Data":"1ce026c7d9e7865804b002636285604d59f66b4508d2e8577f360f5cb9aa549b"} Jan 30 10:56:04 crc kubenswrapper[4984]: I0130 10:56:04.966541 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.033410 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.033533 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.033599 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.034827 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.035100 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.035235 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.035422 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") pod \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\" (UID: \"2498ca77-0e58-4af1-b59d-c19e6b11f2f9\") " Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.040461 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.050766 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g" (OuterVolumeSpecName: "kube-api-access-wlf9g") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "kube-api-access-wlf9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.065233 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.067908 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.068345 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory" (OuterVolumeSpecName: "inventory") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.070183 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.077772 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2498ca77-0e58-4af1-b59d-c19e6b11f2f9" (UID: "2498ca77-0e58-4af1-b59d-c19e6b11f2f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.137958 4984 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138009 4984 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138021 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138031 4984 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138039 4984 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138072 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlf9g\" (UniqueName: \"kubernetes.io/projected/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-kube-api-access-wlf9g\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.138081 4984 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2498ca77-0e58-4af1-b59d-c19e6b11f2f9-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.563206 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" event={"ID":"2498ca77-0e58-4af1-b59d-c19e6b11f2f9","Type":"ContainerDied","Data":"49e3d33584fba571e1d643382a9e95c4634c1598a3ef19de78845ca7b6eae51f"} Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.563265 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e3d33584fba571e1d643382a9e95c4634c1598a3ef19de78845ca7b6eae51f" Jan 30 10:56:05 crc kubenswrapper[4984]: I0130 10:56:05.563340 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-npmxf" Jan 30 10:56:33 crc kubenswrapper[4984]: I0130 10:56:33.001353 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:56:33 crc kubenswrapper[4984]: I0130 10:56:33.001924 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.708372 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709307 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-utilities" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709327 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-utilities" Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709344 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-content" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709352 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="extract-content" Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709440 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709453 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 10:56:58 crc kubenswrapper[4984]: E0130 10:56:58.709481 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709489 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709727 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f4cad4-d9cd-4a1a-84c3-393a330c0b0a" containerName="registry-server" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.709759 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2498ca77-0e58-4af1-b59d-c19e6b11f2f9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.710645 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.713382 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-68tn4" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.713881 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.714576 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.718700 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.721066 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.797764 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.797820 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.797984 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899278 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899330 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899352 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899404 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899455 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899554 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899581 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899606 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.899622 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.900837 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.902504 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:58 crc kubenswrapper[4984]: I0130 10:56:58.905896 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001328 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001394 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001415 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001486 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001515 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001563 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.001972 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.002285 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") 
device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.002790 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.005674 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.011988 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.034745 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.042502 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") " pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.082168 4984 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.514836 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.529471 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.897729 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.901594 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:56:59 crc kubenswrapper[4984]: I0130 10:56:59.914203 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.026621 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.026704 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.027614 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.126239 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerStarted","Data":"b6e20c129e5f1a30f1d5e8bbe28d03846430b2c36243a804176ef658d344f75a"} Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.129445 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.129521 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.129619 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.130029 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"certified-operators-wmx8m\" (UID: 
\"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.130104 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.154083 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"certified-operators-wmx8m\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.297889 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:00 crc kubenswrapper[4984]: I0130 10:57:00.839181 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:00 crc kubenswrapper[4984]: W0130 10:57:00.851227 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3449a677_2462_4a6a_9855_f07157020548.slice/crio-7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb WatchSource:0}: Error finding container 7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb: Status 404 returned error can't find the container with id 7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb Jan 30 10:57:01 crc kubenswrapper[4984]: I0130 10:57:01.139909 4984 generic.go:334] "Generic (PLEG): container finished" podID="3449a677-2462-4a6a-9855-f07157020548" containerID="3bf5c8fde7d23cb63d4a5f2848e4706d5178b2204f74758b2c884ed0d6f76898" exitCode=0 Jan 30 10:57:01 crc kubenswrapper[4984]: I0130 10:57:01.140195 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"3bf5c8fde7d23cb63d4a5f2848e4706d5178b2204f74758b2c884ed0d6f76898"} Jan 30 10:57:01 crc kubenswrapper[4984]: I0130 10:57:01.140225 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerStarted","Data":"7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb"} Jan 30 10:57:03 crc kubenswrapper[4984]: I0130 10:57:03.000320 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 10:57:03 crc kubenswrapper[4984]: I0130 10:57:03.000969 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:57:06 crc kubenswrapper[4984]: I0130 10:57:06.194098 4984 generic.go:334] "Generic (PLEG): container finished" podID="3449a677-2462-4a6a-9855-f07157020548" containerID="b76e60f6701192f196bc34da85eb2ca09a381fee06a5897ddbdd060f1efd8391" exitCode=0 Jan 30 10:57:06 crc kubenswrapper[4984]: I0130 10:57:06.194173 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"b76e60f6701192f196bc34da85eb2ca09a381fee06a5897ddbdd060f1efd8391"} Jan 30 10:57:08 crc kubenswrapper[4984]: I0130 10:57:08.237640 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerStarted","Data":"d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a"} Jan 30 10:57:08 crc kubenswrapper[4984]: I0130 10:57:08.260229 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wmx8m" podStartSLOduration=3.470057882 podStartE2EDuration="9.260192657s" podCreationTimestamp="2026-01-30 10:56:59 +0000 UTC" firstStartedPulling="2026-01-30 10:57:01.141609723 +0000 UTC m=+2725.707913547" lastFinishedPulling="2026-01-30 10:57:06.931744498 +0000 UTC m=+2731.498048322" observedRunningTime="2026-01-30 10:57:08.254443382 +0000 UTC m=+2732.820747246" watchObservedRunningTime="2026-01-30 10:57:08.260192657 +0000 UTC m=+2732.826496481" Jan 30 
10:57:10 crc kubenswrapper[4984]: I0130 10:57:10.298913 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:10 crc kubenswrapper[4984]: I0130 10:57:10.299020 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:10 crc kubenswrapper[4984]: I0130 10:57:10.360841 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:20 crc kubenswrapper[4984]: I0130 10:57:20.347706 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:20 crc kubenswrapper[4984]: I0130 10:57:20.401849 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:21 crc kubenswrapper[4984]: I0130 10:57:21.356382 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wmx8m" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" containerID="cri-o://d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" gracePeriod=2 Jan 30 10:57:22 crc kubenswrapper[4984]: I0130 10:57:22.367379 4984 generic.go:334] "Generic (PLEG): container finished" podID="3449a677-2462-4a6a-9855-f07157020548" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" exitCode=0 Jan 30 10:57:22 crc kubenswrapper[4984]: I0130 10:57:22.367466 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a"} Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.299464 4984 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.300350 4984 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.300991 4984 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 10:57:30 crc kubenswrapper[4984]: E0130 10:57:30.301070 4984 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-wmx8m" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.000513 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.001154 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.001208 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.002529 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.002713 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206" gracePeriod=600 Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.481836 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206" exitCode=0 Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.481903 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206"} Jan 30 10:57:33 crc kubenswrapper[4984]: I0130 10:57:33.481976 4984 scope.go:117] "RemoveContainer" containerID="a6f60e3f9e94b376723e0e628b819027e42f8f207213513b8229eebe1c379988" Jan 30 10:57:39 crc kubenswrapper[4984]: E0130 10:57:39.579918 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 30 10:57:39 crc kubenswrapper[4984]: E0130 10:57:39.582467 4984 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6mxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2281d2df-38c2-4c96-bff0-09cf745f1e50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 10:57:39 crc kubenswrapper[4984]: E0130 10:57:39.586219 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" Jan 30 10:57:39 crc kubenswrapper[4984]: I0130 10:57:39.998187 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.180461 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") pod \"3449a677-2462-4a6a-9855-f07157020548\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.180614 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") pod \"3449a677-2462-4a6a-9855-f07157020548\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.180771 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") pod \"3449a677-2462-4a6a-9855-f07157020548\" (UID: \"3449a677-2462-4a6a-9855-f07157020548\") " Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.182678 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities" (OuterVolumeSpecName: "utilities") pod "3449a677-2462-4a6a-9855-f07157020548" (UID: "3449a677-2462-4a6a-9855-f07157020548"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.192717 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd" (OuterVolumeSpecName: "kube-api-access-jjtvd") pod "3449a677-2462-4a6a-9855-f07157020548" (UID: "3449a677-2462-4a6a-9855-f07157020548"). InnerVolumeSpecName "kube-api-access-jjtvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.229702 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3449a677-2462-4a6a-9855-f07157020548" (UID: "3449a677-2462-4a6a-9855-f07157020548"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.283518 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjtvd\" (UniqueName: \"kubernetes.io/projected/3449a677-2462-4a6a-9855-f07157020548-kube-api-access-jjtvd\") on node \"crc\" DevicePath \"\"" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.283553 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.283563 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3449a677-2462-4a6a-9855-f07157020548-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.561824 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"} Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.565132 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wmx8m" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.565310 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wmx8m" event={"ID":"3449a677-2462-4a6a-9855-f07157020548","Type":"ContainerDied","Data":"7d208285d4c414ebd0a5e676c6ed63a7aaaa477dba38ec65f96e3ed3ccf35ceb"} Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.565339 4984 scope.go:117] "RemoveContainer" containerID="d6e0fbc32ed9cc76a2d81bd1bfc41e7906ebe39feb9e41d5ac5186e26de5735a" Jan 30 10:57:40 crc kubenswrapper[4984]: E0130 10:57:40.566533 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.601030 4984 scope.go:117] "RemoveContainer" containerID="b76e60f6701192f196bc34da85eb2ca09a381fee06a5897ddbdd060f1efd8391" Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.638299 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.648101 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wmx8m"] Jan 30 10:57:40 crc kubenswrapper[4984]: I0130 10:57:40.656390 4984 scope.go:117] "RemoveContainer" containerID="3bf5c8fde7d23cb63d4a5f2848e4706d5178b2204f74758b2c884ed0d6f76898" Jan 30 10:57:42 crc 
kubenswrapper[4984]: I0130 10:57:42.100671 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3449a677-2462-4a6a-9855-f07157020548" path="/var/lib/kubelet/pods/3449a677-2462-4a6a-9855-f07157020548/volumes" Jan 30 10:57:56 crc kubenswrapper[4984]: I0130 10:57:56.607129 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 10:57:57 crc kubenswrapper[4984]: I0130 10:57:57.750851 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerStarted","Data":"37815ab6b9c63edd08166ccf65de1c616d66f60323976a741d216a64b5e3a4ee"} Jan 30 10:57:57 crc kubenswrapper[4984]: I0130 10:57:57.772084 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.6964289900000002 podStartE2EDuration="1m0.772062293s" podCreationTimestamp="2026-01-30 10:56:57 +0000 UTC" firstStartedPulling="2026-01-30 10:56:59.529090647 +0000 UTC m=+2724.095394491" lastFinishedPulling="2026-01-30 10:57:56.60472396 +0000 UTC m=+2781.171027794" observedRunningTime="2026-01-30 10:57:57.767721696 +0000 UTC m=+2782.334025530" watchObservedRunningTime="2026-01-30 10:57:57.772062293 +0000 UTC m=+2782.338366117" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.363725 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:33 crc kubenswrapper[4984]: E0130 10:59:33.364828 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.364846 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:59:33 crc kubenswrapper[4984]: E0130 10:59:33.364862 4984 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-utilities" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.364869 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-utilities" Jan 30 10:59:33 crc kubenswrapper[4984]: E0130 10:59:33.364888 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-content" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.364895 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="extract-content" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.365136 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="3449a677-2462-4a6a-9855-f07157020548" containerName="registry-server" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.366793 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.376586 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.537899 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.538227 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.538535 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641126 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641210 4984 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641475 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.641858 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.665499 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"redhat-marketplace-brx9l\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:33 crc kubenswrapper[4984]: I0130 10:59:33.686360 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.160201 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.675680 4984 generic.go:334] "Generic (PLEG): container finished" podID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" exitCode=0 Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.676029 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3"} Jan 30 10:59:34 crc kubenswrapper[4984]: I0130 10:59:34.676066 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerStarted","Data":"ea5d3b5f5673e849014555fd30ecbf70530ca246faf0657e13ceeeca759576d8"} Jan 30 10:59:35 crc kubenswrapper[4984]: I0130 10:59:35.723322 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerStarted","Data":"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636"} Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.734624 4984 generic.go:334] "Generic (PLEG): container finished" podID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" exitCode=0 Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.734814 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" 
event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636"} Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.735089 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerStarted","Data":"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db"} Jan 30 10:59:36 crc kubenswrapper[4984]: I0130 10:59:36.765875 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brx9l" podStartSLOduration=2.312826724 podStartE2EDuration="3.765855576s" podCreationTimestamp="2026-01-30 10:59:33 +0000 UTC" firstStartedPulling="2026-01-30 10:59:34.678579388 +0000 UTC m=+2879.244883212" lastFinishedPulling="2026-01-30 10:59:36.13160824 +0000 UTC m=+2880.697912064" observedRunningTime="2026-01-30 10:59:36.760943333 +0000 UTC m=+2881.327247157" watchObservedRunningTime="2026-01-30 10:59:36.765855576 +0000 UTC m=+2881.332159400" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.816224 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.818857 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.830461 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.832794 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.832974 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.833112 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935151 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935329 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935379 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935850 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.935939 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:42 crc kubenswrapper[4984]: I0130 10:59:42.966404 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"community-operators-vzsl6\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.151415 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.686739 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.687156 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.726085 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.745589 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.801619 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerStarted","Data":"2e2e8333c78d01f65b23bf3fae9e7a0414dd58408924cf41e559b3886dd8d439"} Jan 30 10:59:43 crc kubenswrapper[4984]: I0130 10:59:43.864675 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:44 crc kubenswrapper[4984]: I0130 10:59:44.817217 4984 generic.go:334] "Generic (PLEG): container finished" podID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" exitCode=0 Jan 30 10:59:44 crc kubenswrapper[4984]: I0130 10:59:44.818402 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48"} Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.141722 4984 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.142331 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-brx9l" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="registry-server" containerID="cri-o://bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" gracePeriod=2 Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.628849 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.813294 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") pod \"99324cad-7c49-4be9-ad61-e9df70a2a954\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.813784 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") pod \"99324cad-7c49-4be9-ad61-e9df70a2a954\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.813986 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") pod \"99324cad-7c49-4be9-ad61-e9df70a2a954\" (UID: \"99324cad-7c49-4be9-ad61-e9df70a2a954\") " Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.814894 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities" (OuterVolumeSpecName: "utilities") pod 
"99324cad-7c49-4be9-ad61-e9df70a2a954" (UID: "99324cad-7c49-4be9-ad61-e9df70a2a954"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.826390 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7" (OuterVolumeSpecName: "kube-api-access-hfdt7") pod "99324cad-7c49-4be9-ad61-e9df70a2a954" (UID: "99324cad-7c49-4be9-ad61-e9df70a2a954"). InnerVolumeSpecName "kube-api-access-hfdt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.833461 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99324cad-7c49-4be9-ad61-e9df70a2a954" (UID: "99324cad-7c49-4be9-ad61-e9df70a2a954"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837509 4984 generic.go:334] "Generic (PLEG): container finished" podID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" exitCode=0 Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837564 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db"} Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837596 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brx9l" event={"ID":"99324cad-7c49-4be9-ad61-e9df70a2a954","Type":"ContainerDied","Data":"ea5d3b5f5673e849014555fd30ecbf70530ca246faf0657e13ceeeca759576d8"} Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837619 4984 scope.go:117] "RemoveContainer" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.837790 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brx9l" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.916537 4984 scope.go:117] "RemoveContainer" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.918239 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.918308 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfdt7\" (UniqueName: \"kubernetes.io/projected/99324cad-7c49-4be9-ad61-e9df70a2a954-kube-api-access-hfdt7\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.918322 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99324cad-7c49-4be9-ad61-e9df70a2a954-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.947732 4984 scope.go:117] "RemoveContainer" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.958424 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.969013 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-brx9l"] Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.986139 4984 scope.go:117] "RemoveContainer" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" Jan 30 10:59:46 crc kubenswrapper[4984]: E0130 10:59:46.987675 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db\": container with ID starting with bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db not found: ID does not exist" containerID="bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.987723 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db"} err="failed to get container status \"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db\": rpc error: code = NotFound desc = could not find container \"bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db\": container with ID starting with bde201118a8d59fb720712e4df5992ffdeda5236d603b9be4bc5878d2446d7db not found: ID does not exist" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.987749 4984 scope.go:117] "RemoveContainer" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" Jan 30 10:59:46 crc kubenswrapper[4984]: E0130 10:59:46.991813 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636\": container with ID starting with 5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636 not found: ID does not exist" containerID="5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.991869 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636"} err="failed to get container status \"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636\": rpc error: code = NotFound desc = could not find container \"5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636\": container with ID 
starting with 5f848b2cf679d36f4e6822279351cfce24a4d3ae101dec813eb6ef075cbe6636 not found: ID does not exist" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.991904 4984 scope.go:117] "RemoveContainer" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" Jan 30 10:59:46 crc kubenswrapper[4984]: E0130 10:59:46.992508 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3\": container with ID starting with 9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3 not found: ID does not exist" containerID="9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3" Jan 30 10:59:46 crc kubenswrapper[4984]: I0130 10:59:46.992539 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3"} err="failed to get container status \"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3\": rpc error: code = NotFound desc = could not find container \"9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3\": container with ID starting with 9ab2b2844660566780e860f14af265bdf7f4013930629269494ad08f79dc8de3 not found: ID does not exist" Jan 30 10:59:47 crc kubenswrapper[4984]: I0130 10:59:47.851044 4984 generic.go:334] "Generic (PLEG): container finished" podID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" exitCode=0 Jan 30 10:59:47 crc kubenswrapper[4984]: I0130 10:59:47.851099 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e"} Jan 30 10:59:48 crc kubenswrapper[4984]: I0130 10:59:48.104717 4984 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" path="/var/lib/kubelet/pods/99324cad-7c49-4be9-ad61-e9df70a2a954/volumes" Jan 30 10:59:48 crc kubenswrapper[4984]: I0130 10:59:48.864137 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerStarted","Data":"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5"} Jan 30 10:59:48 crc kubenswrapper[4984]: I0130 10:59:48.888525 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vzsl6" podStartSLOduration=4.237286675 podStartE2EDuration="6.888503393s" podCreationTimestamp="2026-01-30 10:59:42 +0000 UTC" firstStartedPulling="2026-01-30 10:59:45.826383206 +0000 UTC m=+2890.392687030" lastFinishedPulling="2026-01-30 10:59:48.477599914 +0000 UTC m=+2893.043903748" observedRunningTime="2026-01-30 10:59:48.880089506 +0000 UTC m=+2893.446393350" watchObservedRunningTime="2026-01-30 10:59:48.888503393 +0000 UTC m=+2893.454807217" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.151730 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.152383 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.209997 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.946379 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:53 crc kubenswrapper[4984]: I0130 10:59:53.993859 4984 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:55 crc kubenswrapper[4984]: I0130 10:59:55.926885 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vzsl6" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" containerID="cri-o://c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" gracePeriod=2 Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.390463 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.513700 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") pod \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.513820 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") pod \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.513915 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") pod \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\" (UID: \"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456\") " Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.514729 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities" (OuterVolumeSpecName: "utilities") pod 
"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" (UID: "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.519490 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s" (OuterVolumeSpecName: "kube-api-access-s5d9s") pod "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" (UID: "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456"). InnerVolumeSpecName "kube-api-access-s5d9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.573679 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" (UID: "12ddacea-bb3d-42d0-b0f9-0ab98c2a1456"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.616464 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5d9s\" (UniqueName: \"kubernetes.io/projected/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-kube-api-access-s5d9s\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.616505 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.616516 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940822 4984 generic.go:334] "Generic (PLEG): container finished" podID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" exitCode=0 Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940896 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5"} Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940922 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vzsl6" event={"ID":"12ddacea-bb3d-42d0-b0f9-0ab98c2a1456","Type":"ContainerDied","Data":"2e2e8333c78d01f65b23bf3fae9e7a0414dd58408924cf41e559b3886dd8d439"} Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.940939 4984 scope.go:117] "RemoveContainer" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 
10:59:56.941091 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vzsl6" Jan 30 10:59:56 crc kubenswrapper[4984]: I0130 10:59:56.986522 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.003863 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vzsl6"] Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.004924 4984 scope.go:117] "RemoveContainer" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.027013 4984 scope.go:117] "RemoveContainer" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.067761 4984 scope.go:117] "RemoveContainer" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" Jan 30 10:59:57 crc kubenswrapper[4984]: E0130 10:59:57.068203 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5\": container with ID starting with c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5 not found: ID does not exist" containerID="c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068232 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5"} err="failed to get container status \"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5\": rpc error: code = NotFound desc = could not find container \"c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5\": container with ID starting with 
c35db0ad30be67e3d47f88ac88a3e751d412606cab367bfdbd2b066326b717f5 not found: ID does not exist" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068274 4984 scope.go:117] "RemoveContainer" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" Jan 30 10:59:57 crc kubenswrapper[4984]: E0130 10:59:57.068489 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e\": container with ID starting with 2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e not found: ID does not exist" containerID="2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068596 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e"} err="failed to get container status \"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e\": rpc error: code = NotFound desc = could not find container \"2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e\": container with ID starting with 2d4d37ec5f1e2e061bf8110375af5120da25d0a2186f04dac763c50f7d44096e not found: ID does not exist" Jan 30 10:59:57 crc kubenswrapper[4984]: I0130 10:59:57.068669 4984 scope.go:117] "RemoveContainer" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" Jan 30 10:59:57 crc kubenswrapper[4984]: E0130 10:59:57.069005 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48\": container with ID starting with 04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48 not found: ID does not exist" containerID="04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48" Jan 30 10:59:57 crc 
kubenswrapper[4984]: I0130 10:59:57.069022 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48"} err="failed to get container status \"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48\": rpc error: code = NotFound desc = could not find container \"04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48\": container with ID starting with 04b328e11e1ea0c49974748679890422267473948c7e5f0c12a6d5026202ec48 not found: ID does not exist" Jan 30 10:59:58 crc kubenswrapper[4984]: I0130 10:59:58.102344 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" path="/var/lib/kubelet/pods/12ddacea-bb3d-42d0-b0f9-0ab98c2a1456/volumes" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.147385 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll"] Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148185 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148203 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148228 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148237 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148263 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" 
containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148271 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148281 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148288 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148305 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148312 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="extract-content" Jan 30 11:00:00 crc kubenswrapper[4984]: E0130 11:00:00.148331 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148338 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="extract-utilities" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148556 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="99324cad-7c49-4be9-ad61-e9df70a2a954" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.148592 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ddacea-bb3d-42d0-b0f9-0ab98c2a1456" containerName="registry-server" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.149323 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.152721 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.152816 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.161191 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll"] Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.290540 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.290603 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.290633 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.392131 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.392203 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.392228 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.393405 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.400144 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.409426 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"collect-profiles-29496180-qjpll\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.471478 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.922574 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll"] Jan 30 11:00:00 crc kubenswrapper[4984]: I0130 11:00:00.979294 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" event={"ID":"f298f764-3b24-4f9e-91a8-3f20d3a73f2b","Type":"ContainerStarted","Data":"5121b9eaad056f4b4a78fc124df250e9f4a872cb7101db2ea1e2c8e3ca41fdc8"} Jan 30 11:00:01 crc kubenswrapper[4984]: I0130 11:00:01.989032 4984 generic.go:334] "Generic (PLEG): container finished" podID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerID="401d0865ac4589550574009d624842e59780a90d3f55b79cf9e51e5b49483b0a" exitCode=0 Jan 30 11:00:01 crc kubenswrapper[4984]: I0130 11:00:01.989117 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" 
event={"ID":"f298f764-3b24-4f9e-91a8-3f20d3a73f2b","Type":"ContainerDied","Data":"401d0865ac4589550574009d624842e59780a90d3f55b79cf9e51e5b49483b0a"} Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.000975 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.001205 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.322129 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.480618 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") pod \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.480735 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") pod \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.480824 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") pod \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\" (UID: \"f298f764-3b24-4f9e-91a8-3f20d3a73f2b\") " Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.482472 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "f298f764-3b24-4f9e-91a8-3f20d3a73f2b" (UID: "f298f764-3b24-4f9e-91a8-3f20d3a73f2b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.488597 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9" (OuterVolumeSpecName: "kube-api-access-xsjq9") pod "f298f764-3b24-4f9e-91a8-3f20d3a73f2b" (UID: "f298f764-3b24-4f9e-91a8-3f20d3a73f2b"). 
InnerVolumeSpecName "kube-api-access-xsjq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.492376 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f298f764-3b24-4f9e-91a8-3f20d3a73f2b" (UID: "f298f764-3b24-4f9e-91a8-3f20d3a73f2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.583486 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.583515 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:00:03 crc kubenswrapper[4984]: I0130 11:00:03.583526 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjq9\" (UniqueName: \"kubernetes.io/projected/f298f764-3b24-4f9e-91a8-3f20d3a73f2b-kube-api-access-xsjq9\") on node \"crc\" DevicePath \"\"" Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.006432 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" event={"ID":"f298f764-3b24-4f9e-91a8-3f20d3a73f2b","Type":"ContainerDied","Data":"5121b9eaad056f4b4a78fc124df250e9f4a872cb7101db2ea1e2c8e3ca41fdc8"} Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.006480 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5121b9eaad056f4b4a78fc124df250e9f4a872cb7101db2ea1e2c8e3ca41fdc8" Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.006537 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496180-qjpll" Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.397738 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"] Jan 30 11:00:04 crc kubenswrapper[4984]: I0130 11:00:04.408377 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496135-d89wg"] Jan 30 11:00:06 crc kubenswrapper[4984]: I0130 11:00:06.104237 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5144eb3-3db1-4164-9dc1-51afa4ca6ac9" path="/var/lib/kubelet/pods/c5144eb3-3db1-4164-9dc1-51afa4ca6ac9/volumes" Jan 30 11:00:24 crc kubenswrapper[4984]: I0130 11:00:24.113322 4984 scope.go:117] "RemoveContainer" containerID="626343e1690b32284633537d7a0abbeeacd79d429e95b363b4efee829760178b" Jan 30 11:00:33 crc kubenswrapper[4984]: I0130 11:00:33.000212 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:00:33 crc kubenswrapper[4984]: I0130 11:00:33.000676 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.160691 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496181-vtxhk"] Jan 30 11:01:00 crc kubenswrapper[4984]: E0130 11:01:00.161788 4984 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerName="collect-profiles" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.161809 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerName="collect-profiles" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.162101 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="f298f764-3b24-4f9e-91a8-3f20d3a73f2b" containerName="collect-profiles" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.162937 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.200520 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496181-vtxhk"] Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236033 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236135 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236160 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " 
pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.236333 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.338400 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.338711 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.338997 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.339156 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " 
pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.348090 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.352283 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.352799 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.357047 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"keystone-cron-29496181-vtxhk\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:00 crc kubenswrapper[4984]: I0130 11:01:00.491728 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:00.999941 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496181-vtxhk"] Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:01.521001 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerStarted","Data":"25870fe2cac41ccc743d8289c89046006ae4e9f58b3b6e0dcc1b355bd344b714"} Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:01.521332 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerStarted","Data":"43c0c25f85975483133d041e796d36a8932fe77b7b089ce8bfe96ba54edf4d05"} Jan 30 11:01:01 crc kubenswrapper[4984]: I0130 11:01:01.538437 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496181-vtxhk" podStartSLOduration=1.538418481 podStartE2EDuration="1.538418481s" podCreationTimestamp="2026-01-30 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 11:01:01.534812223 +0000 UTC m=+2966.101116047" watchObservedRunningTime="2026-01-30 11:01:01.538418481 +0000 UTC m=+2966.104722315" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.000756 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.001171 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" 
podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.001278 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.002175 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.002281 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" gracePeriod=600 Jan 30 11:01:03 crc kubenswrapper[4984]: E0130 11:01:03.130861 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.545127 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" exitCode=0 Jan 30 
11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.545608 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"} Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.545862 4984 scope.go:117] "RemoveContainer" containerID="5d67008e44b6404f61720801249026149e17f64ff3598c59c608da86f6227206" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.546533 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:03 crc kubenswrapper[4984]: E0130 11:01:03.546842 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.549200 4984 generic.go:334] "Generic (PLEG): container finished" podID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerID="25870fe2cac41ccc743d8289c89046006ae4e9f58b3b6e0dcc1b355bd344b714" exitCode=0 Jan 30 11:01:03 crc kubenswrapper[4984]: I0130 11:01:03.549264 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerDied","Data":"25870fe2cac41ccc743d8289c89046006ae4e9f58b3b6e0dcc1b355bd344b714"} Jan 30 11:01:04 crc kubenswrapper[4984]: I0130 11:01:04.947586 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041198 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041535 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041631 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.041722 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") pod \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\" (UID: \"a5d9b60c-98e5-4132-9193-0b13ac2893a5\") " Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.052089 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.052151 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp" (OuterVolumeSpecName: "kube-api-access-nhwjp") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "kube-api-access-nhwjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.076328 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.105071 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data" (OuterVolumeSpecName: "config-data") pod "a5d9b60c-98e5-4132-9193-0b13ac2893a5" (UID: "a5d9b60c-98e5-4132-9193-0b13ac2893a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144286 4984 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144321 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhwjp\" (UniqueName: \"kubernetes.io/projected/a5d9b60c-98e5-4132-9193-0b13ac2893a5-kube-api-access-nhwjp\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144335 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.144345 4984 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5d9b60c-98e5-4132-9193-0b13ac2893a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.576597 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496181-vtxhk" event={"ID":"a5d9b60c-98e5-4132-9193-0b13ac2893a5","Type":"ContainerDied","Data":"43c0c25f85975483133d041e796d36a8932fe77b7b089ce8bfe96ba54edf4d05"} Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.576639 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496181-vtxhk" Jan 30 11:01:05 crc kubenswrapper[4984]: I0130 11:01:05.576653 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c0c25f85975483133d041e796d36a8932fe77b7b089ce8bfe96ba54edf4d05" Jan 30 11:01:17 crc kubenswrapper[4984]: I0130 11:01:17.089997 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:17 crc kubenswrapper[4984]: E0130 11:01:17.090791 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:29 crc kubenswrapper[4984]: I0130 11:01:29.090205 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:29 crc kubenswrapper[4984]: E0130 11:01:29.091209 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:44 crc kubenswrapper[4984]: I0130 11:01:44.091764 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:44 crc kubenswrapper[4984]: E0130 11:01:44.092519 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:01:58 crc kubenswrapper[4984]: I0130 11:01:58.091819 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:01:58 crc kubenswrapper[4984]: E0130 11:01:58.092625 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:10 crc kubenswrapper[4984]: I0130 11:02:10.091269 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:10 crc kubenswrapper[4984]: E0130 11:02:10.092006 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:22 crc kubenswrapper[4984]: I0130 11:02:22.090947 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:22 crc kubenswrapper[4984]: E0130 11:02:22.091740 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:36 crc kubenswrapper[4984]: I0130 11:02:36.098848 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:36 crc kubenswrapper[4984]: E0130 11:02:36.099853 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:48 crc kubenswrapper[4984]: I0130 11:02:48.094784 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:48 crc kubenswrapper[4984]: E0130 11:02:48.095409 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:02:59 crc kubenswrapper[4984]: I0130 11:02:59.090519 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:02:59 crc kubenswrapper[4984]: E0130 11:02:59.091206 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:10 crc kubenswrapper[4984]: I0130 11:03:10.090748 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:10 crc kubenswrapper[4984]: E0130 11:03:10.091580 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:24 crc kubenswrapper[4984]: I0130 11:03:24.091474 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:24 crc kubenswrapper[4984]: E0130 11:03:24.092398 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:37 crc kubenswrapper[4984]: I0130 11:03:37.091131 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:37 crc kubenswrapper[4984]: E0130 11:03:37.091946 4984 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:03:52 crc kubenswrapper[4984]: I0130 11:03:52.090916 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:03:52 crc kubenswrapper[4984]: E0130 11:03:52.091494 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:06 crc kubenswrapper[4984]: I0130 11:04:06.096321 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:06 crc kubenswrapper[4984]: E0130 11:04:06.097116 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:17 crc kubenswrapper[4984]: I0130 11:04:17.090688 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:17 crc kubenswrapper[4984]: E0130 11:04:17.091488 4984 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:32 crc kubenswrapper[4984]: I0130 11:04:32.090142 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:32 crc kubenswrapper[4984]: E0130 11:04:32.091774 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:46 crc kubenswrapper[4984]: I0130 11:04:46.099058 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:46 crc kubenswrapper[4984]: E0130 11:04:46.100677 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:04:59 crc kubenswrapper[4984]: I0130 11:04:59.090450 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:04:59 crc kubenswrapper[4984]: E0130 11:04:59.091059 4984 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:11 crc kubenswrapper[4984]: I0130 11:05:11.090972 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:11 crc kubenswrapper[4984]: E0130 11:05:11.092039 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:22 crc kubenswrapper[4984]: I0130 11:05:22.091936 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:22 crc kubenswrapper[4984]: E0130 11:05:22.092968 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:35 crc kubenswrapper[4984]: I0130 11:05:35.090322 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:35 crc kubenswrapper[4984]: E0130 
11:05:35.092258 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:05:48 crc kubenswrapper[4984]: I0130 11:05:48.093704 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:05:48 crc kubenswrapper[4984]: E0130 11:05:48.094729 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:06:00 crc kubenswrapper[4984]: I0130 11:06:00.090981 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:06:00 crc kubenswrapper[4984]: E0130 11:06:00.091807 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:06:11 crc kubenswrapper[4984]: I0130 11:06:11.091100 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596" Jan 30 11:06:11 crc 
kubenswrapper[4984]: I0130 11:06:11.633758 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af"} Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.649552 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:06:59 crc kubenswrapper[4984]: E0130 11:06:59.653188 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerName="keystone-cron" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.653267 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerName="keystone-cron" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.653583 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d9b60c-98e5-4132-9193-0b13ac2893a5" containerName="keystone-cron" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.655543 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.662122 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.736412 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.736597 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.736653 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.838660 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.838770 4984 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.838827 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.839191 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.839467 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.863268 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"redhat-operators-2ngm6\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:06:59 crc kubenswrapper[4984]: I0130 11:06:59.990380 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:00 crc kubenswrapper[4984]: I0130 11:07:00.457896 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.154767 4984 generic.go:334] "Generic (PLEG): container finished" podID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" exitCode=0 Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.154829 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5"} Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.155216 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerStarted","Data":"f64c210d3c51f28fad42d4986b14441accf489347c81125876407ded779a91f9"} Jan 30 11:07:01 crc kubenswrapper[4984]: I0130 11:07:01.157173 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 11:07:02 crc kubenswrapper[4984]: I0130 11:07:02.172973 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerStarted","Data":"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8"} Jan 30 11:07:04 crc kubenswrapper[4984]: I0130 11:07:04.203188 4984 generic.go:334] "Generic (PLEG): container finished" podID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" exitCode=0 Jan 30 11:07:04 crc kubenswrapper[4984]: I0130 11:07:04.204431 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8"} Jan 30 11:07:06 crc kubenswrapper[4984]: I0130 11:07:06.230261 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerStarted","Data":"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307"} Jan 30 11:07:06 crc kubenswrapper[4984]: I0130 11:07:06.256729 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2ngm6" podStartSLOduration=3.340874978 podStartE2EDuration="7.256702308s" podCreationTimestamp="2026-01-30 11:06:59 +0000 UTC" firstStartedPulling="2026-01-30 11:07:01.156961241 +0000 UTC m=+3325.723265065" lastFinishedPulling="2026-01-30 11:07:05.072788571 +0000 UTC m=+3329.639092395" observedRunningTime="2026-01-30 11:07:06.248183358 +0000 UTC m=+3330.814487182" watchObservedRunningTime="2026-01-30 11:07:06.256702308 +0000 UTC m=+3330.823006142" Jan 30 11:07:09 crc kubenswrapper[4984]: I0130 11:07:09.991004 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:09 crc kubenswrapper[4984]: I0130 11:07:09.992800 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:11 crc kubenswrapper[4984]: I0130 11:07:11.076245 4984 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2ngm6" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" probeResult="failure" output=< Jan 30 11:07:11 crc kubenswrapper[4984]: timeout: failed to connect service ":50051" within 1s Jan 30 11:07:11 crc kubenswrapper[4984]: > Jan 30 11:07:20 crc kubenswrapper[4984]: I0130 
11:07:20.074638 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:20 crc kubenswrapper[4984]: I0130 11:07:20.154644 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:20 crc kubenswrapper[4984]: I0130 11:07:20.315052 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:21 crc kubenswrapper[4984]: I0130 11:07:21.398243 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2ngm6" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" containerID="cri-o://b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" gracePeriod=2 Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.083945 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.226003 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") pod \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.226651 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") pod \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.226713 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") pod \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\" (UID: \"b3702dd8-6210-4f96-a5de-eeabe7c42deb\") " Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.227574 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities" (OuterVolumeSpecName: "utilities") pod "b3702dd8-6210-4f96-a5de-eeabe7c42deb" (UID: "b3702dd8-6210-4f96-a5de-eeabe7c42deb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.231374 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn" (OuterVolumeSpecName: "kube-api-access-g9znn") pod "b3702dd8-6210-4f96-a5de-eeabe7c42deb" (UID: "b3702dd8-6210-4f96-a5de-eeabe7c42deb"). InnerVolumeSpecName "kube-api-access-g9znn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.329592 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9znn\" (UniqueName: \"kubernetes.io/projected/b3702dd8-6210-4f96-a5de-eeabe7c42deb-kube-api-access-g9znn\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.329622 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.372875 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3702dd8-6210-4f96-a5de-eeabe7c42deb" (UID: "b3702dd8-6210-4f96-a5de-eeabe7c42deb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408353 4984 generic.go:334] "Generic (PLEG): container finished" podID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" exitCode=0 Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408391 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307"} Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ngm6" event={"ID":"b3702dd8-6210-4f96-a5de-eeabe7c42deb","Type":"ContainerDied","Data":"f64c210d3c51f28fad42d4986b14441accf489347c81125876407ded779a91f9"} Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408431 4984 scope.go:117] "RemoveContainer" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.408533 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ngm6" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.431710 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3702dd8-6210-4f96-a5de-eeabe7c42deb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.441023 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.446411 4984 scope.go:117] "RemoveContainer" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.453076 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2ngm6"] Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.474748 4984 scope.go:117] "RemoveContainer" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.522987 4984 scope.go:117] "RemoveContainer" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" Jan 30 11:07:22 crc kubenswrapper[4984]: E0130 11:07:22.523502 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307\": container with ID starting with b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307 not found: ID does not exist" containerID="b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.523549 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307"} err="failed to get container status 
\"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307\": rpc error: code = NotFound desc = could not find container \"b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307\": container with ID starting with b5d4b6c8cde21dd368a03fddf927e258dad91dcd0e70fa368252646687680307 not found: ID does not exist" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.523577 4984 scope.go:117] "RemoveContainer" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" Jan 30 11:07:22 crc kubenswrapper[4984]: E0130 11:07:22.524038 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8\": container with ID starting with e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8 not found: ID does not exist" containerID="e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.524073 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8"} err="failed to get container status \"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8\": rpc error: code = NotFound desc = could not find container \"e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8\": container with ID starting with e0e14460565ae73ba22c2dccbaf29a8b53ac576cbbd2a84160dfa4994219aac8 not found: ID does not exist" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.524095 4984 scope.go:117] "RemoveContainer" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" Jan 30 11:07:22 crc kubenswrapper[4984]: E0130 11:07:22.524539 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5\": container with ID starting with 0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5 not found: ID does not exist" containerID="0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5" Jan 30 11:07:22 crc kubenswrapper[4984]: I0130 11:07:22.524568 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5"} err="failed to get container status \"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5\": rpc error: code = NotFound desc = could not find container \"0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5\": container with ID starting with 0e533105fa0255b3541f11582197ca0533fd0e2f9a1952e17d2f250eb83f77a5 not found: ID does not exist" Jan 30 11:07:24 crc kubenswrapper[4984]: I0130 11:07:24.106963 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" path="/var/lib/kubelet/pods/b3702dd8-6210-4f96-a5de-eeabe7c42deb/volumes" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.249860 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:28 crc kubenswrapper[4984]: E0130 11:07:28.250822 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.250838 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" Jan 30 11:07:28 crc kubenswrapper[4984]: E0130 11:07:28.250867 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-content" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.250875 4984 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-content" Jan 30 11:07:28 crc kubenswrapper[4984]: E0130 11:07:28.250887 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-utilities" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.250894 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="extract-utilities" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.251061 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3702dd8-6210-4f96-a5de-eeabe7c42deb" containerName="registry-server" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.252519 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.260452 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.361549 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.361609 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.361683 4984 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.462983 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463153 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463183 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463764 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.463824 4984 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.487126 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"certified-operators-m822f\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:28 crc kubenswrapper[4984]: I0130 11:07:28.582603 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.025192 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.514665 4984 generic.go:334] "Generic (PLEG): container finished" podID="56a2e96e-1695-43f9-b487-f79599171463" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab" exitCode=0 Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.514720 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"} Jan 30 11:07:29 crc kubenswrapper[4984]: I0130 11:07:29.514763 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerStarted","Data":"8784493f0ec0a5cbfd98cba95b8ac016872ae00f15be3a0f8f39b96f70990a0b"} Jan 30 11:07:30 crc kubenswrapper[4984]: I0130 
11:07:30.529927 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerStarted","Data":"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"} Jan 30 11:07:31 crc kubenswrapper[4984]: I0130 11:07:31.541970 4984 generic.go:334] "Generic (PLEG): container finished" podID="56a2e96e-1695-43f9-b487-f79599171463" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf" exitCode=0 Jan 30 11:07:31 crc kubenswrapper[4984]: I0130 11:07:31.542072 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"} Jan 30 11:07:32 crc kubenswrapper[4984]: I0130 11:07:32.554316 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerStarted","Data":"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"} Jan 30 11:07:32 crc kubenswrapper[4984]: I0130 11:07:32.584936 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m822f" podStartSLOduration=2.151622542 podStartE2EDuration="4.584907853s" podCreationTimestamp="2026-01-30 11:07:28 +0000 UTC" firstStartedPulling="2026-01-30 11:07:29.524933246 +0000 UTC m=+3354.091237070" lastFinishedPulling="2026-01-30 11:07:31.958218547 +0000 UTC m=+3356.524522381" observedRunningTime="2026-01-30 11:07:32.580969737 +0000 UTC m=+3357.147273561" watchObservedRunningTime="2026-01-30 11:07:32.584907853 +0000 UTC m=+3357.151211697" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.584437 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m822f" 
Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.585291 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.660527 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.748822 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:38 crc kubenswrapper[4984]: I0130 11:07:38.907391 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m822f"] Jan 30 11:07:40 crc kubenswrapper[4984]: I0130 11:07:40.627814 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m822f" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server" containerID="cri-o://499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" gracePeriod=2 Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.410023 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m822f" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.572403 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") pod \"56a2e96e-1695-43f9-b487-f79599171463\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.572547 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") pod \"56a2e96e-1695-43f9-b487-f79599171463\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.572774 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") pod \"56a2e96e-1695-43f9-b487-f79599171463\" (UID: \"56a2e96e-1695-43f9-b487-f79599171463\") " Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.574438 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities" (OuterVolumeSpecName: "utilities") pod "56a2e96e-1695-43f9-b487-f79599171463" (UID: "56a2e96e-1695-43f9-b487-f79599171463"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.578628 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9" (OuterVolumeSpecName: "kube-api-access-9kdf9") pod "56a2e96e-1695-43f9-b487-f79599171463" (UID: "56a2e96e-1695-43f9-b487-f79599171463"). InnerVolumeSpecName "kube-api-access-9kdf9". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.641520 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m822f"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.641574 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"}
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.642339 4984 scope.go:117] "RemoveContainer" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.641522 4984 generic.go:334] "Generic (PLEG): container finished" podID="56a2e96e-1695-43f9-b487-f79599171463" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6" exitCode=0
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.642470 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m822f" event={"ID":"56a2e96e-1695-43f9-b487-f79599171463","Type":"ContainerDied","Data":"8784493f0ec0a5cbfd98cba95b8ac016872ae00f15be3a0f8f39b96f70990a0b"}
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.675610 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.675666 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kdf9\" (UniqueName: \"kubernetes.io/projected/56a2e96e-1695-43f9-b487-f79599171463-kube-api-access-9kdf9\") on node \"crc\" DevicePath \"\""
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.676420 4984 scope.go:117] "RemoveContainer" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.714941 4984 scope.go:117] "RemoveContainer" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.718558 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56a2e96e-1695-43f9-b487-f79599171463" (UID: "56a2e96e-1695-43f9-b487-f79599171463"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.746529 4984 scope.go:117] "RemoveContainer" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"
Jan 30 11:07:41 crc kubenswrapper[4984]: E0130 11:07:41.747039 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6\": container with ID starting with 499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6 not found: ID does not exist" containerID="499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747115 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6"} err="failed to get container status \"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6\": rpc error: code = NotFound desc = could not find container \"499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6\": container with ID starting with 499d857c5da4d9034169a4f933a3950ebc9b9c143ac14be0390f856c063d28b6 not found: ID does not exist"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747164 4984 scope.go:117] "RemoveContainer" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"
Jan 30 11:07:41 crc kubenswrapper[4984]: E0130 11:07:41.747576 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf\": container with ID starting with 09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf not found: ID does not exist" containerID="09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747618 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf"} err="failed to get container status \"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf\": rpc error: code = NotFound desc = could not find container \"09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf\": container with ID starting with 09f339b0ef3b351ab00ae625becb8cfa1e795ec32dd1746115a3b46a34976cbf not found: ID does not exist"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.747646 4984 scope.go:117] "RemoveContainer" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"
Jan 30 11:07:41 crc kubenswrapper[4984]: E0130 11:07:41.748029 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab\": container with ID starting with 27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab not found: ID does not exist" containerID="27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.748120 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab"} err="failed to get container status \"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab\": rpc error: code = NotFound desc = could not find container \"27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab\": container with ID starting with 27f93b09c61cbff1c474e0080a1c5647b99f5dfe748f4bb0f19da96eac758cab not found: ID does not exist"
Jan 30 11:07:41 crc kubenswrapper[4984]: I0130 11:07:41.777057 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56a2e96e-1695-43f9-b487-f79599171463-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 11:07:42 crc kubenswrapper[4984]: I0130 11:07:42.017276 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m822f"]
Jan 30 11:07:42 crc kubenswrapper[4984]: I0130 11:07:42.029927 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m822f"]
Jan 30 11:07:42 crc kubenswrapper[4984]: I0130 11:07:42.102383 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a2e96e-1695-43f9-b487-f79599171463" path="/var/lib/kubelet/pods/56a2e96e-1695-43f9-b487-f79599171463/volumes"
Jan 30 11:08:33 crc kubenswrapper[4984]: I0130 11:08:33.001342 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 11:08:33 crc kubenswrapper[4984]: I0130 11:08:33.001956 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 11:09:03 crc kubenswrapper[4984]: I0130 11:09:03.000932 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 11:09:03 crc kubenswrapper[4984]: I0130 11:09:03.001528 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 11:09:07 crc kubenswrapper[4984]: I0130 11:09:07.573218 4984 generic.go:334] "Generic (PLEG): container finished" podID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerID="37815ab6b9c63edd08166ccf65de1c616d66f60323976a741d216a64b5e3a4ee" exitCode=0
Jan 30 11:09:07 crc kubenswrapper[4984]: I0130 11:09:07.573387 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerDied","Data":"37815ab6b9c63edd08166ccf65de1c616d66f60323976a741d216a64b5e3a4ee"}
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.060539 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171536 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171622 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171706 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171856 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.171942 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172010 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172085 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172120 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.172164 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") pod \"2281d2df-38c2-4c96-bff0-09cf745f1e50\" (UID: \"2281d2df-38c2-4c96-bff0-09cf745f1e50\") "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.177675 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.178081 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data" (OuterVolumeSpecName: "config-data") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.179726 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.181471 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc" (OuterVolumeSpecName: "kube-api-access-m6mxc") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "kube-api-access-m6mxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.181759 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.211869 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.223880 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.224588 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.237903 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2281d2df-38c2-4c96-bff0-09cf745f1e50" (UID: "2281d2df-38c2-4c96-bff0-09cf745f1e50"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275129 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275195 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6mxc\" (UniqueName: \"kubernetes.io/projected/2281d2df-38c2-4c96-bff0-09cf745f1e50-kube-api-access-m6mxc\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275214 4984 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275226 4984 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275238 4984 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2281d2df-38c2-4c96-bff0-09cf745f1e50-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275286 4984 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275300 4984 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2281d2df-38c2-4c96-bff0-09cf745f1e50-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275373 4984 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.275393 4984 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2281d2df-38c2-4c96-bff0-09cf745f1e50-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.307955 4984 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.377783 4984 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.599801 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2281d2df-38c2-4c96-bff0-09cf745f1e50","Type":"ContainerDied","Data":"b6e20c129e5f1a30f1d5e8bbe28d03846430b2c36243a804176ef658d344f75a"}
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.599849 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6e20c129e5f1a30f1d5e8bbe28d03846430b2c36243a804176ef658d344f75a"
Jan 30 11:09:09 crc kubenswrapper[4984]: I0130 11:09:09.600112 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.909394 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910680 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910701 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server"
Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910720 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-content"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910745 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-content"
Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910763 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerName="tempest-tests-tempest-tests-runner"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910774 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerName="tempest-tests-tempest-tests-runner"
Jan 30 11:09:13 crc kubenswrapper[4984]: E0130 11:09:13.910788 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-utilities"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.910796 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="extract-utilities"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.911030 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="2281d2df-38c2-4c96-bff0-09cf745f1e50" containerName="tempest-tests-tempest-tests-runner"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.911047 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a2e96e-1695-43f9-b487-f79599171463" containerName="registry-server"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.911881 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.915517 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-68tn4"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.931885 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.978729 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:13 crc kubenswrapper[4984]: I0130 11:09:13.978779 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287hp\" (UniqueName: \"kubernetes.io/projected/d46e480c-151c-4f4c-a1c8-bbad4b31d37b-kube-api-access-287hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.081694 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.081783 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287hp\" (UniqueName: \"kubernetes.io/projected/d46e480c-151c-4f4c-a1c8-bbad4b31d37b-kube-api-access-287hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.082472 4984 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.115397 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-287hp\" (UniqueName: \"kubernetes.io/projected/d46e480c-151c-4f4c-a1c8-bbad4b31d37b-kube-api-access-287hp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.134412 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d46e480c-151c-4f4c-a1c8-bbad4b31d37b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.252658 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.536867 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 30 11:09:14 crc kubenswrapper[4984]: I0130 11:09:14.658345 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d46e480c-151c-4f4c-a1c8-bbad4b31d37b","Type":"ContainerStarted","Data":"ffbd0bd5cf7579735e9bdb0137bc69592501d71382a683ff2542375d0b9b62b4"}
Jan 30 11:09:15 crc kubenswrapper[4984]: I0130 11:09:15.689103 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.76024037 podStartE2EDuration="2.689078748s" podCreationTimestamp="2026-01-30 11:09:13 +0000 UTC" firstStartedPulling="2026-01-30 11:09:14.539517483 +0000 UTC m=+3459.105821347" lastFinishedPulling="2026-01-30 11:09:15.468355901 +0000 UTC m=+3460.034659725" observedRunningTime="2026-01-30 11:09:15.686901709 +0000 UTC m=+3460.253205543" watchObservedRunningTime="2026-01-30 11:09:15.689078748 +0000 UTC m=+3460.255382612"
Jan 30 11:09:16 crc kubenswrapper[4984]: I0130 11:09:16.681904 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d46e480c-151c-4f4c-a1c8-bbad4b31d37b","Type":"ContainerStarted","Data":"b1bc47d761760227c1f90fe9f59df9222863fef67373339f9d1d1b59bd702678"}
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.001408 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.002240 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.002369 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh"
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.003589 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.003695 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af" gracePeriod=600
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.865403 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af" exitCode=0
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.865455 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af"}
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.866033 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed"}
Jan 30 11:09:33 crc kubenswrapper[4984]: I0130 11:09:33.866102 4984 scope.go:117] "RemoveContainer" containerID="e0bd26784f2e810013e5cfa0145753675c4cc9ca062e392528e82e00fd2df596"
Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.872223 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"]
Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.874509 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.880796 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gjn52"/"openshift-service-ca.crt"
Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.881025 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gjn52"/"kube-root-ca.crt"
Jan 30 11:09:37 crc kubenswrapper[4984]: I0130 11:09:37.896988 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"]
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.009923 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.010019 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.111621 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.111719 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.112376 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.157735 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"must-gather-7xpvm\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.192674 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm"
Jan 30 11:09:38 crc kubenswrapper[4984]: W0130 11:09:38.666093 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19549b5_8918_4ff3_b266_67d6d2ef2c3f.slice/crio-935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7 WatchSource:0}: Error finding container 935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7: Status 404 returned error can't find the container with id 935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.677373 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"]
Jan 30 11:09:38 crc kubenswrapper[4984]: I0130 11:09:38.924863 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjn52/must-gather-7xpvm" event={"ID":"e19549b5-8918-4ff3-b266-67d6d2ef2c3f","Type":"ContainerStarted","Data":"935a8a8c4e30ebac97b0cbd9ddadd9a8795c13eb568874c34663645395b841e7"}
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.276373 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"]
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.279261 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.291165 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"]
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.397527 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.397618 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.397661 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.499726 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.499902 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.499972 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.500788 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.501111 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.530358 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"redhat-marketplace-f4zbk\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:46 crc kubenswrapper[4984]: I0130 11:09:46.621237 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk"
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.282751 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cjkxq"]
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.287039 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq"
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.305103 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"]
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.412831 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq"
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.413041 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq"
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.413155 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq"
Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514229 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514311 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514353 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514883 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.514917 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.536442 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx46n\" (UniqueName: 
\"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"community-operators-cjkxq\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:47 crc kubenswrapper[4984]: I0130 11:09:47.611611 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:09:49 crc kubenswrapper[4984]: I0130 11:09:49.815193 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" podUID="48ae7d4f-38b1-40c0-ad61-815992265930" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 11:09:51 crc kubenswrapper[4984]: I0130 11:09:51.816159 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-h9smt" podUID="48ae7d4f-38b1-40c0-ad61-815992265930" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 11:09:54 crc kubenswrapper[4984]: E0130 11:09:54.216540 4984 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Jan 30 11:09:54 crc kubenswrapper[4984]: E0130 11:09:54.217116 4984 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 11:09:54 crc kubenswrapper[4984]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Jan 30 11:09:54 crc kubenswrapper[4984]: HAVE_SESSION_TOOLS=true Jan 30 11:09:54 crc kubenswrapper[4984]: else Jan 30 11:09:54 crc kubenswrapper[4984]: HAVE_SESSION_TOOLS=false Jan 30 11:09:54 crc kubenswrapper[4984]: fi Jan 30 11:09:54 crc kubenswrapper[4984]: Jan 30 11:09:54 
crc kubenswrapper[4984]: Jan 30 11:09:54 crc kubenswrapper[4984]: echo "[disk usage checker] Started" Jan 30 11:09:54 crc kubenswrapper[4984]: target_dir="/must-gather" Jan 30 11:09:54 crc kubenswrapper[4984]: usage_percentage_limit="80" Jan 30 11:09:54 crc kubenswrapper[4984]: while true; do Jan 30 11:09:54 crc kubenswrapper[4984]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Jan 30 11:09:54 crc kubenswrapper[4984]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Jan 30 11:09:54 crc kubenswrapper[4984]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Jan 30 11:09:54 crc kubenswrapper[4984]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Jan 30 11:09:54 crc kubenswrapper[4984]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 30 11:09:54 crc kubenswrapper[4984]: ps -o sess --no-headers | sort -u | while read sid; do Jan 30 11:09:54 crc kubenswrapper[4984]: [[ "$sid" -eq "${$}" ]] && continue Jan 30 11:09:54 crc kubenswrapper[4984]: pkill --signal SIGKILL --session "$sid" Jan 30 11:09:54 crc kubenswrapper[4984]: done Jan 30 11:09:54 crc kubenswrapper[4984]: else Jan 30 11:09:54 crc kubenswrapper[4984]: kill 0 Jan 30 11:09:54 crc kubenswrapper[4984]: fi Jan 30 11:09:54 crc kubenswrapper[4984]: exit 1 Jan 30 11:09:54 crc kubenswrapper[4984]: fi Jan 30 11:09:54 crc kubenswrapper[4984]: sleep 5 Jan 30 11:09:54 crc kubenswrapper[4984]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Jan 30 11:09:54 crc kubenswrapper[4984]: setsid -w bash <<-MUSTGATHER_EOF Jan 30 11:09:54 crc kubenswrapper[4984]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Jan 30 11:09:54 crc kubenswrapper[4984]: MUSTGATHER_EOF Jan 30 11:09:54 crc 
kubenswrapper[4984]: else Jan 30 11:09:54 crc kubenswrapper[4984]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Jan 30 11:09:54 crc kubenswrapper[4984]: fi; sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmqxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-7xpvm_openshift-must-gather-gjn52(e19549b5-8918-4ff3-b266-67d6d2ef2c3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 30 11:09:54 crc kubenswrapper[4984]: > logger="UnhandledError" Jan 30 11:09:54 crc kubenswrapper[4984]: E0130 11:09:54.220580 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-gjn52/must-gather-7xpvm" podUID="e19549b5-8918-4ff3-b266-67d6d2ef2c3f" Jan 30 11:09:54 crc kubenswrapper[4984]: I0130 11:09:54.564180 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:09:54 crc kubenswrapper[4984]: W0130 11:09:54.637119 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod481c11d6_78db_4d13_a7e1_a25934756df0.slice/crio-24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e WatchSource:0}: Error finding container 24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e: Status 404 returned error can't find the container with id 24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e Jan 30 11:09:54 crc kubenswrapper[4984]: I0130 11:09:54.643443 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:09:55 crc kubenswrapper[4984]: I0130 11:09:55.123580 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerStarted","Data":"370a6116438d07c6ecca66252e5f79af0c94d4041ac1898cb8bc8db4b2616376"} Jan 30 11:09:55 crc kubenswrapper[4984]: I0130 11:09:55.125465 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerStarted","Data":"24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e"} Jan 30 11:09:55 crc kubenswrapper[4984]: E0130 11:09:55.128433 4984 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-gjn52/must-gather-7xpvm" podUID="e19549b5-8918-4ff3-b266-67d6d2ef2c3f" Jan 30 11:09:56 crc kubenswrapper[4984]: I0130 11:09:56.152487 4984 generic.go:334] "Generic (PLEG): container finished" podID="481c11d6-78db-4d13-a7e1-a25934756df0" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" exitCode=0 Jan 30 11:09:56 crc kubenswrapper[4984]: I0130 11:09:56.152557 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152"} Jan 30 11:09:57 crc kubenswrapper[4984]: I0130 11:09:57.164542 4984 generic.go:334] "Generic (PLEG): container finished" podID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerID="4f6fa029dafb176ca120c9a84e5bdb9255646ab3a812e7caee5db132d502df62" exitCode=0 Jan 30 11:09:57 crc kubenswrapper[4984]: I0130 11:09:57.166313 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"4f6fa029dafb176ca120c9a84e5bdb9255646ab3a812e7caee5db132d502df62"} Jan 30 11:09:58 crc kubenswrapper[4984]: I0130 11:09:58.181574 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerStarted","Data":"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e"} Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.193020 4984 generic.go:334] "Generic 
(PLEG): container finished" podID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerID="6fbec1b473389c2ea89dfc5a4a1d947f1b6cdf436469b3bbe3b0f3e2b70a0829" exitCode=0 Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.193172 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"6fbec1b473389c2ea89dfc5a4a1d947f1b6cdf436469b3bbe3b0f3e2b70a0829"} Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.195752 4984 generic.go:334] "Generic (PLEG): container finished" podID="481c11d6-78db-4d13-a7e1-a25934756df0" containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" exitCode=0 Jan 30 11:09:59 crc kubenswrapper[4984]: I0130 11:09:59.195804 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e"} Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.207337 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerStarted","Data":"7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787"} Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.210357 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerStarted","Data":"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667"} Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.238312 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cjkxq" podStartSLOduration=10.779091177 podStartE2EDuration="13.238292396s" podCreationTimestamp="2026-01-30 
11:09:47 +0000 UTC" firstStartedPulling="2026-01-30 11:09:57.167899305 +0000 UTC m=+3501.734203169" lastFinishedPulling="2026-01-30 11:09:59.627100564 +0000 UTC m=+3504.193404388" observedRunningTime="2026-01-30 11:10:00.230211318 +0000 UTC m=+3504.796515152" watchObservedRunningTime="2026-01-30 11:10:00.238292396 +0000 UTC m=+3504.804596220" Jan 30 11:10:00 crc kubenswrapper[4984]: I0130 11:10:00.254360 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f4zbk" podStartSLOduration=11.826195641 podStartE2EDuration="14.25433827s" podCreationTimestamp="2026-01-30 11:09:46 +0000 UTC" firstStartedPulling="2026-01-30 11:09:57.168600874 +0000 UTC m=+3501.734904718" lastFinishedPulling="2026-01-30 11:09:59.596743483 +0000 UTC m=+3504.163047347" observedRunningTime="2026-01-30 11:10:00.246660882 +0000 UTC m=+3504.812964706" watchObservedRunningTime="2026-01-30 11:10:00.25433827 +0000 UTC m=+3504.820642094" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.512280 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.520232 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gjn52/must-gather-7xpvm"] Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.795425 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.965526 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") pod \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.965583 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") pod \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\" (UID: \"e19549b5-8918-4ff3-b266-67d6d2ef2c3f\") " Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.965953 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e19549b5-8918-4ff3-b266-67d6d2ef2c3f" (UID: "e19549b5-8918-4ff3-b266-67d6d2ef2c3f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.966173 4984 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:03 crc kubenswrapper[4984]: I0130 11:10:03.971669 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk" (OuterVolumeSpecName: "kube-api-access-jmqxk") pod "e19549b5-8918-4ff3-b266-67d6d2ef2c3f" (UID: "e19549b5-8918-4ff3-b266-67d6d2ef2c3f"). InnerVolumeSpecName "kube-api-access-jmqxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:10:04 crc kubenswrapper[4984]: I0130 11:10:04.067986 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmqxk\" (UniqueName: \"kubernetes.io/projected/e19549b5-8918-4ff3-b266-67d6d2ef2c3f-kube-api-access-jmqxk\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:04 crc kubenswrapper[4984]: I0130 11:10:04.105846 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19549b5-8918-4ff3-b266-67d6d2ef2c3f" path="/var/lib/kubelet/pods/e19549b5-8918-4ff3-b266-67d6d2ef2c3f/volumes" Jan 30 11:10:04 crc kubenswrapper[4984]: I0130 11:10:04.247424 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjn52/must-gather-7xpvm" Jan 30 11:10:06 crc kubenswrapper[4984]: I0130 11:10:06.622062 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:06 crc kubenswrapper[4984]: I0130 11:10:06.622778 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:06 crc kubenswrapper[4984]: I0130 11:10:06.711915 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.343712 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.612614 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.612668 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:07 crc kubenswrapper[4984]: I0130 11:10:07.677534 4984 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:08 crc kubenswrapper[4984]: I0130 11:10:08.351110 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:09 crc kubenswrapper[4984]: I0130 11:10:09.703858 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:10:09 crc kubenswrapper[4984]: I0130 11:10:09.704357 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f4zbk" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" containerID="cri-o://49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" gracePeriod=2 Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.222597 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311367 4984 generic.go:334] "Generic (PLEG): container finished" podID="481c11d6-78db-4d13-a7e1-a25934756df0" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" exitCode=0 Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311415 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667"} Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311444 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f4zbk" event={"ID":"481c11d6-78db-4d13-a7e1-a25934756df0","Type":"ContainerDied","Data":"24bbecc2c2ee5e00e8dcb14c9ed4b50cf7d97d2dd9dca22bfd417ddb5bab381e"} Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 
11:10:10.311464 4984 scope.go:117] "RemoveContainer" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.311604 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f4zbk" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.347924 4984 scope.go:117] "RemoveContainer" containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.368386 4984 scope.go:117] "RemoveContainer" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.405987 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") pod \"481c11d6-78db-4d13-a7e1-a25934756df0\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.406160 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") pod \"481c11d6-78db-4d13-a7e1-a25934756df0\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.406294 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") pod \"481c11d6-78db-4d13-a7e1-a25934756df0\" (UID: \"481c11d6-78db-4d13-a7e1-a25934756df0\") " Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.407744 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities" (OuterVolumeSpecName: 
"utilities") pod "481c11d6-78db-4d13-a7e1-a25934756df0" (UID: "481c11d6-78db-4d13-a7e1-a25934756df0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.409519 4984 scope.go:117] "RemoveContainer" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" Jan 30 11:10:10 crc kubenswrapper[4984]: E0130 11:10:10.409935 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667\": container with ID starting with 49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667 not found: ID does not exist" containerID="49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.409969 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667"} err="failed to get container status \"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667\": rpc error: code = NotFound desc = could not find container \"49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667\": container with ID starting with 49e0c739079bf35e331351580f102f0fc5b6ead3834f4fb055170c7707230667 not found: ID does not exist" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.409996 4984 scope.go:117] "RemoveContainer" containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" Jan 30 11:10:10 crc kubenswrapper[4984]: E0130 11:10:10.410279 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e\": container with ID starting with 112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e not found: ID does not exist" 
containerID="112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.410311 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e"} err="failed to get container status \"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e\": rpc error: code = NotFound desc = could not find container \"112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e\": container with ID starting with 112e1b3e116eda648583c29d9c05b79b291c1ff77394b9c5cece2005424ec49e not found: ID does not exist" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.410325 4984 scope.go:117] "RemoveContainer" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" Jan 30 11:10:10 crc kubenswrapper[4984]: E0130 11:10:10.410677 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152\": container with ID starting with db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152 not found: ID does not exist" containerID="db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.410696 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152"} err="failed to get container status \"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152\": rpc error: code = NotFound desc = could not find container \"db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152\": container with ID starting with db678107ab4c483171b0a0740bccb0dd13d242e29147f40f4d3b4a0a459df152 not found: ID does not exist" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.413548 4984 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7" (OuterVolumeSpecName: "kube-api-access-ppdx7") pod "481c11d6-78db-4d13-a7e1-a25934756df0" (UID: "481c11d6-78db-4d13-a7e1-a25934756df0"). InnerVolumeSpecName "kube-api-access-ppdx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.438280 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481c11d6-78db-4d13-a7e1-a25934756df0" (UID: "481c11d6-78db-4d13-a7e1-a25934756df0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.508679 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppdx7\" (UniqueName: \"kubernetes.io/projected/481c11d6-78db-4d13-a7e1-a25934756df0-kube-api-access-ppdx7\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.508710 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.508719 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481c11d6-78db-4d13-a7e1-a25934756df0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.647992 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 30 11:10:10 crc kubenswrapper[4984]: I0130 11:10:10.655327 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f4zbk"] Jan 
30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.096517 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.100665 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cjkxq" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="registry-server" containerID="cri-o://7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787" gracePeriod=2 Jan 30 11:10:11 crc kubenswrapper[4984]: E0130 11:10:11.134505 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded097268_a63b_4ff5_ba86_a5717af5a2ad.slice/crio-7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787.scope\": RecentStats: unable to find data in memory cache]" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.337918 4984 generic.go:334] "Generic (PLEG): container finished" podID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerID="7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787" exitCode=0 Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.337979 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787"} Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.538540 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.630009 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") pod \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.630077 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") pod \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.630112 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") pod \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\" (UID: \"ed097268-a63b-4ff5-ba86-a5717af5a2ad\") " Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.631931 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities" (OuterVolumeSpecName: "utilities") pod "ed097268-a63b-4ff5-ba86-a5717af5a2ad" (UID: "ed097268-a63b-4ff5-ba86-a5717af5a2ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.635921 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n" (OuterVolumeSpecName: "kube-api-access-rx46n") pod "ed097268-a63b-4ff5-ba86-a5717af5a2ad" (UID: "ed097268-a63b-4ff5-ba86-a5717af5a2ad"). InnerVolumeSpecName "kube-api-access-rx46n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.683215 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed097268-a63b-4ff5-ba86-a5717af5a2ad" (UID: "ed097268-a63b-4ff5-ba86-a5717af5a2ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.732921 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.733159 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed097268-a63b-4ff5-ba86-a5717af5a2ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:11 crc kubenswrapper[4984]: I0130 11:10:11.733174 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx46n\" (UniqueName: \"kubernetes.io/projected/ed097268-a63b-4ff5-ba86-a5717af5a2ad-kube-api-access-rx46n\") on node \"crc\" DevicePath \"\"" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.103048 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" path="/var/lib/kubelet/pods/481c11d6-78db-4d13-a7e1-a25934756df0/volumes" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.351096 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjkxq" event={"ID":"ed097268-a63b-4ff5-ba86-a5717af5a2ad","Type":"ContainerDied","Data":"370a6116438d07c6ecca66252e5f79af0c94d4041ac1898cb8bc8db4b2616376"} Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.351155 4984 scope.go:117] "RemoveContainer" 
containerID="7148d3255dc1fd63194bc45cbfcd5fc635a96a60b8b2dc405d9eb262adaae787" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.351181 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjkxq" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.373109 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.380850 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cjkxq"] Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.384514 4984 scope.go:117] "RemoveContainer" containerID="6fbec1b473389c2ea89dfc5a4a1d947f1b6cdf436469b3bbe3b0f3e2b70a0829" Jan 30 11:10:12 crc kubenswrapper[4984]: I0130 11:10:12.420398 4984 scope.go:117] "RemoveContainer" containerID="4f6fa029dafb176ca120c9a84e5bdb9255646ab3a812e7caee5db132d502df62" Jan 30 11:10:14 crc kubenswrapper[4984]: I0130 11:10:14.103165 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" path="/var/lib/kubelet/pods/ed097268-a63b-4ff5-ba86-a5717af5a2ad/volumes" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.562642 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563447 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563459 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563469 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" 
containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563476 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563501 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563507 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563515 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563522 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563537 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563544 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="extract-utilities" Jan 30 11:10:53 crc kubenswrapper[4984]: E0130 11:10:53.563555 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563560 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" containerName="extract-content" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563784 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed097268-a63b-4ff5-ba86-a5717af5a2ad" 
containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.563797 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="481c11d6-78db-4d13-a7e1-a25934756df0" containerName="registry-server" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.564794 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.567447 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fkd9b"/"default-dockercfg-xc97d" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.567523 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fkd9b"/"kube-root-ca.crt" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.567706 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fkd9b"/"openshift-service-ca.crt" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.574518 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.701638 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.701873 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 
11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.803611 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.803803 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.804388 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.824334 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"must-gather-clm44\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:53 crc kubenswrapper[4984]: I0130 11:10:53.882574 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:10:54 crc kubenswrapper[4984]: I0130 11:10:54.432493 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:10:54 crc kubenswrapper[4984]: I0130 11:10:54.882307 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerStarted","Data":"92e57bf5d4c242d464a31af3fe1fba7441c13337468fb1e746c40edbe2adfcf4"} Jan 30 11:10:55 crc kubenswrapper[4984]: I0130 11:10:55.895829 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerStarted","Data":"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad"} Jan 30 11:10:55 crc kubenswrapper[4984]: I0130 11:10:55.896213 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerStarted","Data":"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6"} Jan 30 11:10:55 crc kubenswrapper[4984]: I0130 11:10:55.929589 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fkd9b/must-gather-clm44" podStartSLOduration=2.413636158 podStartE2EDuration="2.929571344s" podCreationTimestamp="2026-01-30 11:10:53 +0000 UTC" firstStartedPulling="2026-01-30 11:10:54.46336454 +0000 UTC m=+3559.029668364" lastFinishedPulling="2026-01-30 11:10:54.979299686 +0000 UTC m=+3559.545603550" observedRunningTime="2026-01-30 11:10:55.927405806 +0000 UTC m=+3560.493709670" watchObservedRunningTime="2026-01-30 11:10:55.929571344 +0000 UTC m=+3560.495875168" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.211992 4984 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-fkd9b/crc-debug-8v7mz"] Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.213872 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.251984 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.252059 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.354365 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.354445 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.354529 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.384219 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"crc-debug-8v7mz\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.533580 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:10:59 crc kubenswrapper[4984]: I0130 11:10:59.936483 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" event={"ID":"fc71ca13-3f19-4c0e-8245-1656fc723d67","Type":"ContainerStarted","Data":"c4a7e615730dfe10a68536f14b62ed012291f023cbf341e261c7a8b5734a82e7"} Jan 30 11:11:11 crc kubenswrapper[4984]: I0130 11:11:11.036461 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" event={"ID":"fc71ca13-3f19-4c0e-8245-1656fc723d67","Type":"ContainerStarted","Data":"2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0"} Jan 30 11:11:11 crc kubenswrapper[4984]: I0130 11:11:11.054241 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" podStartSLOduration=1.719969952 podStartE2EDuration="12.054222942s" podCreationTimestamp="2026-01-30 11:10:59 +0000 UTC" firstStartedPulling="2026-01-30 11:10:59.569597744 +0000 UTC m=+3564.135901568" lastFinishedPulling="2026-01-30 11:11:09.903850734 +0000 UTC m=+3574.470154558" observedRunningTime="2026-01-30 11:11:11.048305562 +0000 UTC m=+3575.614609396" watchObservedRunningTime="2026-01-30 
11:11:11.054222942 +0000 UTC m=+3575.620526766" Jan 30 11:11:33 crc kubenswrapper[4984]: I0130 11:11:33.001198 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:11:33 crc kubenswrapper[4984]: I0130 11:11:33.001696 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:11:51 crc kubenswrapper[4984]: I0130 11:11:51.420163 4984 generic.go:334] "Generic (PLEG): container finished" podID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerID="2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0" exitCode=0 Jan 30 11:11:51 crc kubenswrapper[4984]: I0130 11:11:51.420245 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" event={"ID":"fc71ca13-3f19-4c0e-8245-1656fc723d67","Type":"ContainerDied","Data":"2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0"} Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.541950 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.590953 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") pod \"fc71ca13-3f19-4c0e-8245-1656fc723d67\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.591089 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host" (OuterVolumeSpecName: "host") pod "fc71ca13-3f19-4c0e-8245-1656fc723d67" (UID: "fc71ca13-3f19-4c0e-8245-1656fc723d67"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.591366 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") pod \"fc71ca13-3f19-4c0e-8245-1656fc723d67\" (UID: \"fc71ca13-3f19-4c0e-8245-1656fc723d67\") " Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.592014 4984 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc71ca13-3f19-4c0e-8245-1656fc723d67-host\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.600318 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-8v7mz"] Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.602973 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx" (OuterVolumeSpecName: "kube-api-access-z6lkx") pod "fc71ca13-3f19-4c0e-8245-1656fc723d67" (UID: "fc71ca13-3f19-4c0e-8245-1656fc723d67"). 
InnerVolumeSpecName "kube-api-access-z6lkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.609939 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-8v7mz"] Jan 30 11:11:52 crc kubenswrapper[4984]: I0130 11:11:52.693459 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6lkx\" (UniqueName: \"kubernetes.io/projected/fc71ca13-3f19-4c0e-8245-1656fc723d67-kube-api-access-z6lkx\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.439973 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a7e615730dfe10a68536f14b62ed012291f023cbf341e261c7a8b5734a82e7" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.440120 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-8v7mz" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.783190 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-2nn68"] Jan 30 11:11:53 crc kubenswrapper[4984]: E0130 11:11:53.783610 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerName="container-00" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.783624 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerName="container-00" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.783803 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" containerName="container-00" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.784706 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.926354 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:53 crc kubenswrapper[4984]: I0130 11:11:53.926537 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.028897 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.029088 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.029151 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc 
kubenswrapper[4984]: I0130 11:11:54.053841 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"crc-debug-2nn68\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.105520 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.111115 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc71ca13-3f19-4c0e-8245-1656fc723d67" path="/var/lib/kubelet/pods/fc71ca13-3f19-4c0e-8245-1656fc723d67/volumes" Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.455741 4984 generic.go:334] "Generic (PLEG): container finished" podID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerID="0fcfb36280e7fbeb4a457398a6d3612434afcbb9929764166a0472904552ce68" exitCode=0 Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.455836 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" event={"ID":"004731e7-03ce-4a34-919a-3cfcc05195a4","Type":"ContainerDied","Data":"0fcfb36280e7fbeb4a457398a6d3612434afcbb9929764166a0472904552ce68"} Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.456280 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" event={"ID":"004731e7-03ce-4a34-919a-3cfcc05195a4","Type":"ContainerStarted","Data":"46c4bd630060d2892d1426c3630934eeec19495d3caf7a251009deb5ba67ff3a"} Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.969107 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-2nn68"] Jan 30 11:11:54 crc kubenswrapper[4984]: I0130 11:11:54.977088 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-fkd9b/crc-debug-2nn68"] Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.555428 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.661606 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") pod \"004731e7-03ce-4a34-919a-3cfcc05195a4\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.662018 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") pod \"004731e7-03ce-4a34-919a-3cfcc05195a4\" (UID: \"004731e7-03ce-4a34-919a-3cfcc05195a4\") " Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.662133 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host" (OuterVolumeSpecName: "host") pod "004731e7-03ce-4a34-919a-3cfcc05195a4" (UID: "004731e7-03ce-4a34-919a-3cfcc05195a4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.662923 4984 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/004731e7-03ce-4a34-919a-3cfcc05195a4-host\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.669800 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4" (OuterVolumeSpecName: "kube-api-access-swcn4") pod "004731e7-03ce-4a34-919a-3cfcc05195a4" (UID: "004731e7-03ce-4a34-919a-3cfcc05195a4"). 
InnerVolumeSpecName "kube-api-access-swcn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:11:55 crc kubenswrapper[4984]: I0130 11:11:55.764602 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swcn4\" (UniqueName: \"kubernetes.io/projected/004731e7-03ce-4a34-919a-3cfcc05195a4-kube-api-access-swcn4\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.108516 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" path="/var/lib/kubelet/pods/004731e7-03ce-4a34-919a-3cfcc05195a4/volumes" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.220421 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-47684"] Jan 30 11:11:56 crc kubenswrapper[4984]: E0130 11:11:56.220988 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerName="container-00" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.221019 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerName="container-00" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.221385 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="004731e7-03ce-4a34-919a-3cfcc05195a4" containerName="container-00" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.222335 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.275283 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.275507 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.378479 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.378725 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.378728 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc 
kubenswrapper[4984]: I0130 11:11:56.411067 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"crc-debug-47684\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.479655 4984 scope.go:117] "RemoveContainer" containerID="0fcfb36280e7fbeb4a457398a6d3612434afcbb9929764166a0472904552ce68" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.479773 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-2nn68" Jan 30 11:11:56 crc kubenswrapper[4984]: I0130 11:11:56.539196 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:56 crc kubenswrapper[4984]: W0130 11:11:56.584670 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71550211_cb32_4484_9ebf_6ea10af9bf54.slice/crio-c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8 WatchSource:0}: Error finding container c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8: Status 404 returned error can't find the container with id c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8 Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.494710 4984 generic.go:334] "Generic (PLEG): container finished" podID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerID="11137b0f1e7bdd58c1eab12c664b2e9087ca7e616952c8a2fd6a95aa242a172b" exitCode=0 Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.494812 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-47684" 
event={"ID":"71550211-cb32-4484-9ebf-6ea10af9bf54","Type":"ContainerDied","Data":"11137b0f1e7bdd58c1eab12c664b2e9087ca7e616952c8a2fd6a95aa242a172b"} Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.495354 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/crc-debug-47684" event={"ID":"71550211-cb32-4484-9ebf-6ea10af9bf54","Type":"ContainerStarted","Data":"c626f31280c7443cca045386db361005dfd2d4ad302edf7c704a2836075c6ca8"} Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.554676 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-47684"] Jan 30 11:11:57 crc kubenswrapper[4984]: I0130 11:11:57.574171 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fkd9b/crc-debug-47684"] Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.622131 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.732766 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") pod \"71550211-cb32-4484-9ebf-6ea10af9bf54\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.732901 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") pod \"71550211-cb32-4484-9ebf-6ea10af9bf54\" (UID: \"71550211-cb32-4484-9ebf-6ea10af9bf54\") " Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.732969 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host" (OuterVolumeSpecName: "host") pod "71550211-cb32-4484-9ebf-6ea10af9bf54" (UID: 
"71550211-cb32-4484-9ebf-6ea10af9bf54"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.733578 4984 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71550211-cb32-4484-9ebf-6ea10af9bf54-host\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.737331 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj" (OuterVolumeSpecName: "kube-api-access-wlbvj") pod "71550211-cb32-4484-9ebf-6ea10af9bf54" (UID: "71550211-cb32-4484-9ebf-6ea10af9bf54"). InnerVolumeSpecName "kube-api-access-wlbvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:11:58 crc kubenswrapper[4984]: I0130 11:11:58.835470 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlbvj\" (UniqueName: \"kubernetes.io/projected/71550211-cb32-4484-9ebf-6ea10af9bf54-kube-api-access-wlbvj\") on node \"crc\" DevicePath \"\"" Jan 30 11:11:59 crc kubenswrapper[4984]: I0130 11:11:59.517616 4984 scope.go:117] "RemoveContainer" containerID="11137b0f1e7bdd58c1eab12c664b2e9087ca7e616952c8a2fd6a95aa242a172b" Jan 30 11:11:59 crc kubenswrapper[4984]: I0130 11:11:59.517674 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fkd9b/crc-debug-47684" Jan 30 11:12:00 crc kubenswrapper[4984]: I0130 11:12:00.102134 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" path="/var/lib/kubelet/pods/71550211-cb32-4484-9ebf-6ea10af9bf54/volumes" Jan 30 11:12:03 crc kubenswrapper[4984]: I0130 11:12:03.000601 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:12:03 crc kubenswrapper[4984]: I0130 11:12:03.000982 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:12:12 crc kubenswrapper[4984]: I0130 11:12:12.905765 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cfd8d5fd8-lwgk4_217935e2-7a1e-44a6-b6fd-e64c41155d6d/barbican-api/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.054348 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6cfd8d5fd8-lwgk4_217935e2-7a1e-44a6-b6fd-e64c41155d6d/barbican-api-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.111999 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75ff98474b-zm29s_1368411d-c934-4d15-a67b-dc840dbe010d/barbican-keystone-listener/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.178894 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-75ff98474b-zm29s_1368411d-c934-4d15-a67b-dc840dbe010d/barbican-keystone-listener-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.285739 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664bd6b5fc-shfjg_aa6393c8-34de-43fc-9a00-a0f87b31d8e8/barbican-worker-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.309153 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664bd6b5fc-shfjg_aa6393c8-34de-43fc-9a00-a0f87b31d8e8/barbican-worker/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.454488 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-szlmd_ba20d4a0-7acc-4813-8fa9-6f166802bd04/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.511550 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/ceilometer-central-agent/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.600923 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/ceilometer-notification-agent/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.656154 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/proxy-httpd/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.691739 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_aa8fceae-cb31-48dd-8104-9a905f788af6/sg-core/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.824558 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a8d6abba-9a6d-4a99-a68b-659c1e111893/cinder-api/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 
11:12:13.880044 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a8d6abba-9a6d-4a99-a68b-659c1e111893/cinder-api-log/0.log" Jan 30 11:12:13 crc kubenswrapper[4984]: I0130 11:12:13.964945 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4ced7140-d346-43c7-9139-7f460af079e2/cinder-scheduler/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.012148 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4ced7140-d346-43c7-9139-7f460af079e2/probe/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.135878 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-dnzpg_ed90c997-eddb-4afb-ae0d-31dd3ef4c485/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.244550 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-blm26_5ca6f868-9db4-483a-bea5-dc471b160721/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.374189 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-fvwt9_f3033afa-9ac2-4f32-a02d-372dcdbeb984/init/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.502466 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-fvwt9_f3033afa-9ac2-4f32-a02d-372dcdbeb984/init/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.563413 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-fvwt9_f3033afa-9ac2-4f32-a02d-372dcdbeb984/dnsmasq-dns/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.652808 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zfts7_8414dabf-1fa1-4a4c-8db5-55ef7397164d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.782846 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2fa01bff-d884-4b1f-b0c2-8c0fbd957a30/glance-log/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.804656 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2fa01bff-d884-4b1f-b0c2-8c0fbd957a30/glance-httpd/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.949866 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_96bc5a16-54a8-4008-98ea-3adb9b24e9fa/glance-httpd/0.log" Jan 30 11:12:14 crc kubenswrapper[4984]: I0130 11:12:14.955579 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_96bc5a16-54a8-4008-98ea-3adb9b24e9fa/glance-log/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.159957 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb76cb6cb-wtx8d_d1c7d24e-f131-485d-aaec-80a94d7ddd96/horizon/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.250801 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vnlf9_908eb334-fac2-41ed-96d6-d7c80f8e98b3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.415784 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cb76cb6cb-wtx8d_d1c7d24e-f131-485d-aaec-80a94d7ddd96/horizon-log/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.417433 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6cgx8_875c90f8-2855-43ce-993f-fa64c7d92c66/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.565775 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496181-vtxhk_a5d9b60c-98e5-4132-9193-0b13ac2893a5/keystone-cron/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.708648 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-9fd9687b7-kdppr_0cddf025-bb36-4984-82b8-360ab9f3d91c/keystone-api/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.730892 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6dbb6fd7-4483-4431-89a8-b4e8aa06a4fa/kube-state-metrics/0.log" Jan 30 11:12:15 crc kubenswrapper[4984]: I0130 11:12:15.931758 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gzcdm_d3ca7cba-514d-4761-821d-9b48578f0cc3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.276074 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5565c8d7-xqnh6_0e442774-b2c1-418a-a5b2-edfd20f23c27/neutron-httpd/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.284632 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5565c8d7-xqnh6_0e442774-b2c1-418a-a5b2-edfd20f23c27/neutron-api/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.456699 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kxctp_4549607f-18ca-42e1-8c2b-b7d9793e2005/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.908171 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_4d02a683-2231-4e04-89bb-748baf8bc65d/nova-cell0-conductor-conductor/0.log" Jan 30 11:12:16 crc kubenswrapper[4984]: I0130 11:12:16.964031 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2a8b1830-c479-4612-a461-7cb46d2c949f/nova-api-log/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.098645 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2a8b1830-c479-4612-a461-7cb46d2c949f/nova-api-api/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.263283 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5b097926-177e-428a-a271-ede45f90f7d6/nova-cell1-conductor-conductor/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.270888 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3933f23e-210c-483f-82ec-eb0cdbc09f4c/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.417832 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lrcvm_eaa18315-192f-412f-b94c-708c98209a5a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.516747 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0538ab81-6e35-473d-860f-7f680671646d/nova-metadata-log/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.847963 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66296a3e-33af-496f-a870-9d0932aa4178/mysql-bootstrap/0.log" Jan 30 11:12:17 crc kubenswrapper[4984]: I0130 11:12:17.860949 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d79d0dc1-f229-4dd7-9d7c-a0e420d6452d/nova-scheduler-scheduler/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 
11:12:18.092881 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66296a3e-33af-496f-a870-9d0932aa4178/mysql-bootstrap/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.095549 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66296a3e-33af-496f-a870-9d0932aa4178/galera/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.287346 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4717968-368b-4b9d-acca-b2aee21abd1f/mysql-bootstrap/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.493118 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4717968-368b-4b9d-acca-b2aee21abd1f/mysql-bootstrap/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.555366 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c4717968-368b-4b9d-acca-b2aee21abd1f/galera/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.663365 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_141e094b-e8c8-4a61-b93c-8dec5ac89823/openstackclient/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.669966 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0538ab81-6e35-473d-860f-7f680671646d/nova-metadata-metadata/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.787195 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-m4spx_63184ee8-263b-4506-8844-4ae4fd2a80c7/ovn-controller/0.log" Jan 30 11:12:18 crc kubenswrapper[4984]: I0130 11:12:18.920001 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ms66d_dbcc0b77-42fd-47ec-9b91-94e2c070c0ec/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.029643 4984 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovsdb-server-init/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.202276 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovsdb-server/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.240988 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovsdb-server-init/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.253519 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-js4wt_c2590fda-d6e0-4182-96ef-8326001108d9/ovs-vswitchd/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.433026 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-57hv6_2f986324-c570-4c65-aed1-952aa2538af8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.469386 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e86681f0-5ba9-45f2-b0b7-0b9a49dc6706/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.506384 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e86681f0-5ba9-45f2-b0b7-0b9a49dc6706/ovn-northd/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.627600 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.657428 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ce6abda1-5d7b-4eb8-bd5b-cd8c153e35b4/ovsdbserver-nb/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 
11:12:19.826834 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_53bd6a11-6ac6-4b0e-ae41-8afd88f351e6/openstack-network-exporter/0.log" Jan 30 11:12:19 crc kubenswrapper[4984]: I0130 11:12:19.860562 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_53bd6a11-6ac6-4b0e-ae41-8afd88f351e6/ovsdbserver-sb/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.035388 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68474f84b8-6pzwt_34cd991a-90cf-410c-828d-db99caf6dcea/placement-api/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.070032 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68474f84b8-6pzwt_34cd991a-90cf-410c-828d-db99caf6dcea/placement-log/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.129479 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92837592-8d1a-4eec-9c06-1d906b4724c2/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.379485 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92837592-8d1a-4eec-9c06-1d906b4724c2/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.392195 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_92837592-8d1a-4eec-9c06-1d906b4724c2/rabbitmq/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.392952 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_137801a7-4625-4c4c-a855-8ecdf65e509a/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.579998 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_137801a7-4625-4c4c-a855-8ecdf65e509a/setup-container/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.616822 4984 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_137801a7-4625-4c4c-a855-8ecdf65e509a/rabbitmq/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.695178 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-f4z45_b6b5ab38-6c9b-4526-bbee-d3a4c460ea78/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.804153 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-pgdm4_049a948c-1945-4217-b728-7f39570dd740/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:20 crc kubenswrapper[4984]: I0130 11:12:20.949151 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vrbbn_1985e15d-70be-4079-bd48-55c782dfcba7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.051792 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7fxcn_b337ec46-c5ba-4b83-91f7-ad4b826d9595/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.145079 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ds8rj_1e567c3d-d9b0-4be3-ad02-21a342ce33fd/ssh-known-hosts-edpm-deployment/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.417541 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f6d8f475-hmb99_a88ca399-adf6-4df4-8216-84de7603712b/proxy-server/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.438029 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f6d8f475-hmb99_a88ca399-adf6-4df4-8216-84de7603712b/proxy-httpd/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.613151 4984 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j9rvs_7cfe4feb-b1bb-4904-9955-c5833ef34e9e/swift-ring-rebalance/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.630763 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-auditor/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.695118 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-reaper/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.847294 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-auditor/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.877128 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-replicator/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.882971 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/account-server/0.log" Jan 30 11:12:21 crc kubenswrapper[4984]: I0130 11:12:21.933974 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-replicator/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.023868 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-server/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.087662 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-auditor/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.117101 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/container-updater/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.149179 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-expirer/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.265736 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-replicator/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.272063 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-server/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.352991 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/object-updater/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.422532 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/rsync/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.492319 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b286d6-b58f-4d49-ae49-e3acdc77b7f5/swift-recon-cron/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.655776 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-npmxf_2498ca77-0e58-4af1-b59d-c19e6b11f2f9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.691219 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2281d2df-38c2-4c96-bff0-09cf745f1e50/tempest-tests-tempest-tests-runner/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.836550 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d46e480c-151c-4f4c-a1c8-bbad4b31d37b/test-operator-logs-container/0.log" Jan 30 11:12:22 crc kubenswrapper[4984]: I0130 11:12:22.898638 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-z6tm5_d0aef065-96aa-4cd6-9069-627c5f97fcc3/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 11:12:32 crc kubenswrapper[4984]: I0130 11:12:32.426609 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ab30531b-1df7-460e-956c-bc849792098b/memcached/0.log" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.000480 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.000537 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.000582 4984 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.001132 4984 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed"} pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" containerMessage="Container machine-config-daemon 
failed liveness probe, will be restarted" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.001190 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" containerID="cri-o://fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" gracePeriod=600 Jan 30 11:12:33 crc kubenswrapper[4984]: E0130 11:12:33.121036 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.804021 4984 generic.go:334] "Generic (PLEG): container finished" podID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" exitCode=0 Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.804112 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerDied","Data":"fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed"} Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.804430 4984 scope.go:117] "RemoveContainer" containerID="d64c99bfc23d5f2bcaeeb039253b1f5f097b14bb7674f64a03143a7286d332af" Jan 30 11:12:33 crc kubenswrapper[4984]: I0130 11:12:33.805229 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:12:33 crc kubenswrapper[4984]: E0130 11:12:33.805606 4984 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:12:45 crc kubenswrapper[4984]: I0130 11:12:45.091838 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:12:45 crc kubenswrapper[4984]: E0130 11:12:45.092801 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.561567 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-sxpfj_5d977367-099f-4a10-bf37-9e9cd913932e/manager/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.677239 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-cnxbk_74bafe89-dc08-4029-823c-f0c3579b8d6b/manager/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.756289 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/util/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.892971 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/util/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.961378 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/pull/0.log" Jan 30 11:12:47 crc kubenswrapper[4984]: I0130 11:12:47.974360 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/pull/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.172330 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/pull/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.191994 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/util/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.203055 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d2eed9ebc341f837afe415533f37467de0a385f9f653cec9f33aca9b8c8kmgw_66ab9762-201b-40f3-8d9b-1d114a7d778e/extract/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.316218 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-b674n_8c70fc0b-a348-4dcd-8fc3-9afa1c22318e/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.409918 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-tjfpn_254d2d7e-3636-429d-b043-501d76db73e9/manager/0.log" Jan 30 11:12:48 crc 
kubenswrapper[4984]: I0130 11:12:48.495168 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-zl2fj_5e7c3856-3562-4cb4-b131-48302c43ce25/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.628638 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-zzd6d_7a6dd1f5-d0b6-49a6-9270-dd98f2147932/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.792764 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-8hrrf_3899fe05-64bb-48b9-88dc-2341ad9bc00b/manager/0.log" Jan 30 11:12:48 crc kubenswrapper[4984]: I0130 11:12:48.889701 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-t5j55_e420c57f-7248-4454-926f-48766e48236c/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.033340 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-zwc2t_dd895dbf-b809-498c-95fd-dfd09a9eeb4d/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.047970 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-2wvrh_739ed1d4-c090-4166-9352-d048e0b281d6/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.237058 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-t75dn_67a8ae49-7f19-47bc-8e54-0873c535f6ff/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.299800 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-2tbcn_1d30b9a6-fe73-4e32-9095-65b1950f7afe/manager/0.log" Jan 30 
11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.472719 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-gcbx5_ab1e50a1-4d8f-45f4-8fa0-fd4732dce6f1/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.485467 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-sh7cp_c6ee91ae-9b91-46a7-ad2a-c67133a4f40e/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.644923 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4df2g45_8d22f0a7-a541-405b-8146-fb098d02ddcc/manager/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.763431 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7d4ff8bbbc-68r69_f4b80c7c-3e81-48d4-862c-684369655891/operator/0.log" Jan 30 11:12:49 crc kubenswrapper[4984]: I0130 11:12:49.933453 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nqgjv_be54871d-c3f5-40bc-b6cd-63602755ca51/registry-server/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.250141 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-28kkh_bb50c219-6036-48d0-8568-0a1601150272/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.252982 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-fx6t9_69e058b7-deda-4eb8-9cac-6bc08032b3bf/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.532766 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vpt86_e8bf6651-ff58-478c-be28-39732dac675b/operator/0.log" 
Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.618958 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-jvcvp_c3eec896-3441-4b0e-a7e5-4bde717dbccd/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.824032 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-r7hs4_df5d4f32-b49b-46ea-8aac-a3b76b2f8f00/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.901200 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4lz58_350834d1-9352-4ca5-9c8a-acf60193ebc8/manager/0.log" Jan 30 11:12:50 crc kubenswrapper[4984]: I0130 11:12:50.964327 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8d786f48c-jtznv_87613c07-d864-4440-b31c-03c4bb3f8ce0/manager/0.log" Jan 30 11:12:51 crc kubenswrapper[4984]: I0130 11:12:51.051797 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-h7pcb_9a53674a-07ad-4bfc-80c8-f55bcc286eb0/manager/0.log" Jan 30 11:12:58 crc kubenswrapper[4984]: I0130 11:12:58.090657 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:12:58 crc kubenswrapper[4984]: E0130 11:12:58.091437 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:10 crc kubenswrapper[4984]: I0130 11:13:10.096863 4984 
scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:10 crc kubenswrapper[4984]: E0130 11:13:10.097568 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:11 crc kubenswrapper[4984]: I0130 11:13:11.627921 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g5m7t_3c2dcd5a-96f0-48ff-a004-9764d24b66b1/control-plane-machine-set-operator/0.log" Jan 30 11:13:11 crc kubenswrapper[4984]: I0130 11:13:11.744379 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b9k4d_218f0398-9175-448b-83b8-6445e2c3df37/kube-rbac-proxy/0.log" Jan 30 11:13:11 crc kubenswrapper[4984]: I0130 11:13:11.790185 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b9k4d_218f0398-9175-448b-83b8-6445e2c3df37/machine-api-operator/0.log" Jan 30 11:13:22 crc kubenswrapper[4984]: I0130 11:13:22.090422 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:22 crc kubenswrapper[4984]: E0130 11:13:22.091706 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:25 crc kubenswrapper[4984]: I0130 11:13:25.302450 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rlb95_f1c83115-1333-4064-8217-eb2edae57d74/cert-manager-controller/0.log" Jan 30 11:13:25 crc kubenswrapper[4984]: I0130 11:13:25.454125 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2f5gm_c7557472-15a5-48a9-8a84-bd8478d45a4b/cert-manager-cainjector/0.log" Jan 30 11:13:25 crc kubenswrapper[4984]: I0130 11:13:25.510281 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-r7gsp_4a218ad6-abfb-49ac-9f07-a79d9f3bd07e/cert-manager-webhook/0.log" Jan 30 11:13:34 crc kubenswrapper[4984]: I0130 11:13:34.090433 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:34 crc kubenswrapper[4984]: E0130 11:13:34.091323 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.340624 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mpwzb_471cb540-b50e-4adb-8984-65c46a7f9714/nmstate-console-plugin/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.520039 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vh6vz_88dac402-7307-465d-b5a0-61762ee570c6/nmstate-handler/0.log" Jan 
30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.521481 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7x2rq_f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf/kube-rbac-proxy/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.612032 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7x2rq_f66d2ef8-abab-4e7e-ab7e-75cb4e8df0cf/nmstate-metrics/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.719313 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tl42h_ce2396e1-20f3-4b5a-b3ab-4e8496d6c58b/nmstate-operator/0.log" Jan 30 11:13:38 crc kubenswrapper[4984]: I0130 11:13:38.790774 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-gnkrh_739c7b03-ba6e-48de-a07b-6bd4206c206f/nmstate-webhook/0.log" Jan 30 11:13:46 crc kubenswrapper[4984]: I0130 11:13:46.095673 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:46 crc kubenswrapper[4984]: E0130 11:13:46.096396 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:13:58 crc kubenswrapper[4984]: I0130 11:13:58.107849 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:13:58 crc kubenswrapper[4984]: E0130 11:13:58.108772 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.126918 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4tngn_2ae05bf6-d99c-4fb1-9780-20249ec78e1e/kube-rbac-proxy/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.190272 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4tngn_2ae05bf6-d99c-4fb1-9780-20249ec78e1e/controller/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.328385 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-j62qw_7e54bb11-7cfb-4840-b861-bd6d184c36f4/frr-k8s-webhook-server/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.385206 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.553901 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.619821 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.619962 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.621435 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.764833 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.807092 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.860935 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:06 crc kubenswrapper[4984]: I0130 11:14:06.870154 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.053127 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-metrics/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.063407 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/controller/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.077699 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-reloader/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.081997 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/cp-frr-files/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.256672 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/kube-rbac-proxy-frr/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.305775 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/frr-metrics/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.321887 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/kube-rbac-proxy/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.447275 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/reloader/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.544333 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-f5cdbcd49-plrfk_fb5cf2c1-4334-4aee-9f94-2f1c2797b484/manager/0.log" Jan 30 11:14:07 crc kubenswrapper[4984]: I0130 11:14:07.754595 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86d4db4f7b-qz6m4_b73da0f4-c46a-4e4f-bdf8-3f3663bd1b05/webhook-server/0.log" Jan 30 11:14:08 crc kubenswrapper[4984]: I0130 11:14:08.176593 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wc8c7_07684256-0759-426a-9ba0-40514aa3e7ac/kube-rbac-proxy/0.log" Jan 30 11:14:08 crc kubenswrapper[4984]: I0130 11:14:08.503496 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z7vlt_997946ae-eb76-422f-9954-d9dae3ca8184/frr/0.log" Jan 30 11:14:08 crc kubenswrapper[4984]: I0130 11:14:08.569776 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wc8c7_07684256-0759-426a-9ba0-40514aa3e7ac/speaker/0.log" Jan 30 11:14:09 crc kubenswrapper[4984]: I0130 11:14:09.090847 4984 scope.go:117] "RemoveContainer" 
containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:09 crc kubenswrapper[4984]: E0130 11:14:09.091317 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:21 crc kubenswrapper[4984]: I0130 11:14:21.091144 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:21 crc kubenswrapper[4984]: E0130 11:14:21.092333 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.535897 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/util/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.626353 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/pull/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.629882 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/util/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.755898 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/pull/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.921517 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/util/0.log" Jan 30 11:14:23 crc kubenswrapper[4984]: I0130 11:14:23.923409 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/extract/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.031608 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc5gvpg_d796f450-1311-422f-9f63-324d0a624f15/pull/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.166938 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/util/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.360389 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/pull/0.log" Jan 30 11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.408961 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/util/0.log" Jan 30 
11:14:24 crc kubenswrapper[4984]: I0130 11:14:24.557410 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/pull/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.527077 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/extract/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.531177 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/util/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.532557 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713sk9n4_790867b3-e261-4564-a2d4-ffc041c3a090/pull/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.713219 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-utilities/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.913772 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-utilities/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.944825 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-content/0.log" Jan 30 11:14:25 crc kubenswrapper[4984]: I0130 11:14:25.959188 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-content/0.log" Jan 30 
11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.146556 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.182385 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.378393 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.541674 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.559378 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.639400 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.849375 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-content/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.856391 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/extract-utilities/0.log" Jan 30 11:14:26 crc kubenswrapper[4984]: I0130 11:14:26.873389 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-zckjp_c47b45ee-75cf-4e33-bfde-721099cda0a9/registry-server/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.439028 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tttcx_ed0e4098-37d9-4094-99d0-1892881696ad/marketplace-operator/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.459165 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hn9gx_a725adac-ef1c-400b-bde2-756c97779906/registry-server/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.489412 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-utilities/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.624911 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-utilities/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.642511 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-content/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.662682 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-content/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.840101 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-content/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.885503 4984 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/extract-utilities/0.log" Jan 30 11:14:27 crc kubenswrapper[4984]: I0130 11:14:27.905808 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-utilities/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.014223 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8prhf_719f7e0f-9e74-40fe-b2cb-a967e9e0ac4d/registry-server/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.059516 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-content/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.108094 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-utilities/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.132520 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-content/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.297475 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-utilities/0.log" Jan 30 11:14:28 crc kubenswrapper[4984]: I0130 11:14:28.317261 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/extract-content/0.log" Jan 30 11:14:29 crc kubenswrapper[4984]: I0130 11:14:29.108071 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-47j92_24af9dab-3f7a-4433-b367-5ecafcf89754/registry-server/0.log" Jan 30 
11:14:35 crc kubenswrapper[4984]: I0130 11:14:35.090847 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:35 crc kubenswrapper[4984]: E0130 11:14:35.091690 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:14:50 crc kubenswrapper[4984]: I0130 11:14:50.091707 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:14:50 crc kubenswrapper[4984]: E0130 11:14:50.093022 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.167610 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv"] Jan 30 11:15:00 crc kubenswrapper[4984]: E0130 11:15:00.168933 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerName="container-00" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.168951 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerName="container-00" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.169207 4984 
memory_manager.go:354] "RemoveStaleState removing state" podUID="71550211-cb32-4484-9ebf-6ea10af9bf54" containerName="container-00" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.169930 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.172245 4984 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.172985 4984 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.180359 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv"] Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.187838 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.188113 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.188184 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.290028 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.290112 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.290293 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.291132 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: 
I0130 11:15:00.295715 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.308852 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"collect-profiles-29496195-d4bvv\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:00 crc kubenswrapper[4984]: I0130 11:15:00.491927 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.008280 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv"] Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.224381 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerStarted","Data":"c2707a5bb73729166e32cd080c31f04f1da0df9767101d86be749adb56c4a63e"} Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.224639 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerStarted","Data":"d3fb0854f8cf173c0bad1b0d2314531b92035fc1f6b21c103ed37b14eb61932f"} Jan 30 11:15:01 crc kubenswrapper[4984]: I0130 11:15:01.240553 4984 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" podStartSLOduration=1.240534683 podStartE2EDuration="1.240534683s" podCreationTimestamp="2026-01-30 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 11:15:01.236460443 +0000 UTC m=+3805.802764267" watchObservedRunningTime="2026-01-30 11:15:01.240534683 +0000 UTC m=+3805.806838507" Jan 30 11:15:02 crc kubenswrapper[4984]: I0130 11:15:02.090321 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:02 crc kubenswrapper[4984]: E0130 11:15:02.090672 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:02 crc kubenswrapper[4984]: I0130 11:15:02.235696 4984 generic.go:334] "Generic (PLEG): container finished" podID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerID="c2707a5bb73729166e32cd080c31f04f1da0df9767101d86be749adb56c4a63e" exitCode=0 Jan 30 11:15:02 crc kubenswrapper[4984]: I0130 11:15:02.236579 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerDied","Data":"c2707a5bb73729166e32cd080c31f04f1da0df9767101d86be749adb56c4a63e"} Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.665613 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.761600 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") pod \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.761879 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") pod \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.761988 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") pod \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\" (UID: \"411c5cf2-35bd-4df8-afbd-117cc0c2e785\") " Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.762594 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume" (OuterVolumeSpecName: "config-volume") pod "411c5cf2-35bd-4df8-afbd-117cc0c2e785" (UID: "411c5cf2-35bd-4df8-afbd-117cc0c2e785"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.768971 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n" (OuterVolumeSpecName: "kube-api-access-k6d4n") pod "411c5cf2-35bd-4df8-afbd-117cc0c2e785" (UID: "411c5cf2-35bd-4df8-afbd-117cc0c2e785"). 
InnerVolumeSpecName "kube-api-access-k6d4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.777354 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "411c5cf2-35bd-4df8-afbd-117cc0c2e785" (UID: "411c5cf2-35bd-4df8-afbd-117cc0c2e785"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.864366 4984 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/411c5cf2-35bd-4df8-afbd-117cc0c2e785-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.864405 4984 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/411c5cf2-35bd-4df8-afbd-117cc0c2e785-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 11:15:03 crc kubenswrapper[4984]: I0130 11:15:03.864430 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6d4n\" (UniqueName: \"kubernetes.io/projected/411c5cf2-35bd-4df8-afbd-117cc0c2e785-kube-api-access-k6d4n\") on node \"crc\" DevicePath \"\"" Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.258305 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" event={"ID":"411c5cf2-35bd-4df8-afbd-117cc0c2e785","Type":"ContainerDied","Data":"d3fb0854f8cf173c0bad1b0d2314531b92035fc1f6b21c103ed37b14eb61932f"} Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.258737 4984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3fb0854f8cf173c0bad1b0d2314531b92035fc1f6b21c103ed37b14eb61932f" Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.258802 4984 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496195-d4bvv" Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.352931 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 11:15:04 crc kubenswrapper[4984]: I0130 11:15:04.360936 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496150-kfrjt"] Jan 30 11:15:06 crc kubenswrapper[4984]: I0130 11:15:06.108230 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c13999b-7269-403d-8be6-78d42f65f26c" path="/var/lib/kubelet/pods/5c13999b-7269-403d-8be6-78d42f65f26c/volumes" Jan 30 11:15:17 crc kubenswrapper[4984]: I0130 11:15:17.091158 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:17 crc kubenswrapper[4984]: E0130 11:15:17.092483 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:24 crc kubenswrapper[4984]: I0130 11:15:24.587616 4984 scope.go:117] "RemoveContainer" containerID="673987907c6890a3da91b3b133a9ad126ca5110425aedf8c5b019ce181470176" Jan 30 11:15:28 crc kubenswrapper[4984]: I0130 11:15:28.091158 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:28 crc kubenswrapper[4984]: E0130 11:15:28.091831 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:41 crc kubenswrapper[4984]: I0130 11:15:41.090660 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:41 crc kubenswrapper[4984]: E0130 11:15:41.091659 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:15:56 crc kubenswrapper[4984]: I0130 11:15:56.098558 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:15:56 crc kubenswrapper[4984]: E0130 11:15:56.099348 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:11 crc kubenswrapper[4984]: I0130 11:16:11.090069 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:11 crc kubenswrapper[4984]: E0130 11:16:11.090987 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:12 crc kubenswrapper[4984]: I0130 11:16:12.106581 4984 generic.go:334] "Generic (PLEG): container finished" podID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" exitCode=0 Jan 30 11:16:12 crc kubenswrapper[4984]: I0130 11:16:12.106843 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fkd9b/must-gather-clm44" event={"ID":"5d446618-ad2a-4a27-a8f6-6afe185631c9","Type":"ContainerDied","Data":"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6"} Jan 30 11:16:12 crc kubenswrapper[4984]: I0130 11:16:12.107553 4984 scope.go:117] "RemoveContainer" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:13 crc kubenswrapper[4984]: I0130 11:16:13.104568 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fkd9b_must-gather-clm44_5d446618-ad2a-4a27-a8f6-6afe185631c9/gather/0.log" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.101135 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.102088 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fkd9b/must-gather-clm44" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" containerID="cri-o://ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" gracePeriod=2 Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.116962 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-fkd9b/must-gather-clm44"] Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.581337 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fkd9b_must-gather-clm44_5d446618-ad2a-4a27-a8f6-6afe185631c9/copy/0.log" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.582056 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.657406 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") pod \"5d446618-ad2a-4a27-a8f6-6afe185631c9\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.657611 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") pod \"5d446618-ad2a-4a27-a8f6-6afe185631c9\" (UID: \"5d446618-ad2a-4a27-a8f6-6afe185631c9\") " Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.662861 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx" (OuterVolumeSpecName: "kube-api-access-r75gx") pod "5d446618-ad2a-4a27-a8f6-6afe185631c9" (UID: "5d446618-ad2a-4a27-a8f6-6afe185631c9"). InnerVolumeSpecName "kube-api-access-r75gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.760022 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r75gx\" (UniqueName: \"kubernetes.io/projected/5d446618-ad2a-4a27-a8f6-6afe185631c9-kube-api-access-r75gx\") on node \"crc\" DevicePath \"\"" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.795934 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5d446618-ad2a-4a27-a8f6-6afe185631c9" (UID: "5d446618-ad2a-4a27-a8f6-6afe185631c9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:16:21 crc kubenswrapper[4984]: I0130 11:16:21.861651 4984 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d446618-ad2a-4a27-a8f6-6afe185631c9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.107552 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" path="/var/lib/kubelet/pods/5d446618-ad2a-4a27-a8f6-6afe185631c9/volumes" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.223472 4984 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fkd9b_must-gather-clm44_5d446618-ad2a-4a27-a8f6-6afe185631c9/copy/0.log" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.223917 4984 generic.go:334] "Generic (PLEG): container finished" podID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" exitCode=143 Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.223980 4984 scope.go:117] "RemoveContainer" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" Jan 30 11:16:22 crc 
kubenswrapper[4984]: I0130 11:16:22.223996 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fkd9b/must-gather-clm44" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.245033 4984 scope.go:117] "RemoveContainer" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.319487 4984 scope.go:117] "RemoveContainer" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" Jan 30 11:16:22 crc kubenswrapper[4984]: E0130 11:16:22.320647 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad\": container with ID starting with ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad not found: ID does not exist" containerID="ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.320709 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad"} err="failed to get container status \"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad\": rpc error: code = NotFound desc = could not find container \"ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad\": container with ID starting with ed0c6afe69aa8c987766156c82c3e7283eac9d6604410a43d6cd6c1a61d297ad not found: ID does not exist" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.320744 4984 scope.go:117] "RemoveContainer" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:22 crc kubenswrapper[4984]: E0130 11:16:22.324519 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6\": container with ID starting with dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6 not found: ID does not exist" containerID="dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6" Jan 30 11:16:22 crc kubenswrapper[4984]: I0130 11:16:22.324587 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6"} err="failed to get container status \"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6\": rpc error: code = NotFound desc = could not find container \"dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6\": container with ID starting with dbd902c3fc424f8d7e21106bfbd46395f967efbd1e5d1bbb017638b4aa9e03c6 not found: ID does not exist" Jan 30 11:16:24 crc kubenswrapper[4984]: I0130 11:16:24.090541 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:24 crc kubenswrapper[4984]: E0130 11:16:24.091388 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:39 crc kubenswrapper[4984]: I0130 11:16:39.091024 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:39 crc kubenswrapper[4984]: E0130 11:16:39.092089 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:16:53 crc kubenswrapper[4984]: I0130 11:16:53.091290 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:16:53 crc kubenswrapper[4984]: E0130 11:16:53.092242 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:17:07 crc kubenswrapper[4984]: I0130 11:17:07.091962 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:17:07 crc kubenswrapper[4984]: E0130 11:17:07.096517 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:17:20 crc kubenswrapper[4984]: I0130 11:17:20.091238 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:17:20 crc kubenswrapper[4984]: E0130 11:17:20.093857 4984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m4gnh_openshift-machine-config-operator(6c1bd910-b683-42bf-966f-51a04ac18bd2)\"" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" Jan 30 11:17:24 crc kubenswrapper[4984]: I0130 11:17:24.723441 4984 scope.go:117] "RemoveContainer" containerID="2e299e2d8be015f9e4c1acc3aab498b2e7d851fdde1fd71f21478452b5b784f0" Jan 30 11:17:35 crc kubenswrapper[4984]: I0130 11:17:35.091573 4984 scope.go:117] "RemoveContainer" containerID="fcbbcec9aa4e7113d6a1684aa213066d8866b3971182fa07f850b2a1f2550aed" Jan 30 11:17:36 crc kubenswrapper[4984]: I0130 11:17:36.026939 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" event={"ID":"6c1bd910-b683-42bf-966f-51a04ac18bd2","Type":"ContainerStarted","Data":"bd836a9bd18698fedb1d2813808a345c2079c53385cfda390de4be0312d43024"} Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.696339 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:17:55 crc kubenswrapper[4984]: E0130 11:17:55.697370 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697474 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" Jan 30 11:17:55 crc kubenswrapper[4984]: E0130 11:17:55.697512 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerName="collect-profiles" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697520 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerName="collect-profiles" Jan 30 11:17:55 crc kubenswrapper[4984]: E0130 11:17:55.697542 4984 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="gather" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697550 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="gather" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697761 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="gather" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697773 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d446618-ad2a-4a27-a8f6-6afe185631c9" containerName="copy" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.697787 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="411c5cf2-35bd-4df8-afbd-117cc0c2e785" containerName="collect-profiles" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.699692 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.713089 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.830791 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.830965 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " 
pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.831075 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.932965 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933067 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933118 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933871 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"certified-operators-24tpt\" (UID: 
\"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.933910 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:55 crc kubenswrapper[4984]: I0130 11:17:55.958696 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"certified-operators-24tpt\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:56 crc kubenswrapper[4984]: I0130 11:17:56.034662 4984 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:17:56 crc kubenswrapper[4984]: I0130 11:17:56.524785 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.273133 4984 generic.go:334] "Generic (PLEG): container finished" podID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" exitCode=0 Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.273206 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79"} Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.273271 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" 
event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerStarted","Data":"5f1a932494ac9a3424b4a534f629ccdc1759c36a76486ae46608fece3886a242"} Jan 30 11:17:57 crc kubenswrapper[4984]: I0130 11:17:57.276225 4984 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 11:17:59 crc kubenswrapper[4984]: I0130 11:17:59.303121 4984 generic.go:334] "Generic (PLEG): container finished" podID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" exitCode=0 Jan 30 11:17:59 crc kubenswrapper[4984]: I0130 11:17:59.303202 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98"} Jan 30 11:18:01 crc kubenswrapper[4984]: I0130 11:18:01.327649 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerStarted","Data":"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7"} Jan 30 11:18:01 crc kubenswrapper[4984]: I0130 11:18:01.351875 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-24tpt" podStartSLOduration=3.257712415 podStartE2EDuration="6.351850431s" podCreationTimestamp="2026-01-30 11:17:55 +0000 UTC" firstStartedPulling="2026-01-30 11:17:57.27589674 +0000 UTC m=+3981.842200574" lastFinishedPulling="2026-01-30 11:18:00.370034736 +0000 UTC m=+3984.936338590" observedRunningTime="2026-01-30 11:18:01.350039312 +0000 UTC m=+3985.916343186" watchObservedRunningTime="2026-01-30 11:18:01.351850431 +0000 UTC m=+3985.918154275" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.035002 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.035994 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.108692 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.434543 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:06 crc kubenswrapper[4984]: I0130 11:18:06.501703 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:18:08 crc kubenswrapper[4984]: I0130 11:18:08.407466 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-24tpt" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" containerID="cri-o://29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" gracePeriod=2 Jan 30 11:18:08 crc kubenswrapper[4984]: I0130 11:18:08.935834 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.043307 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") pod \"ef041e51-918d-41ef-ac7b-d2ab23b45757\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.043355 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") pod \"ef041e51-918d-41ef-ac7b-d2ab23b45757\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.043530 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") pod \"ef041e51-918d-41ef-ac7b-d2ab23b45757\" (UID: \"ef041e51-918d-41ef-ac7b-d2ab23b45757\") " Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.044462 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities" (OuterVolumeSpecName: "utilities") pod "ef041e51-918d-41ef-ac7b-d2ab23b45757" (UID: "ef041e51-918d-41ef-ac7b-d2ab23b45757"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.053544 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn" (OuterVolumeSpecName: "kube-api-access-r4sxn") pod "ef041e51-918d-41ef-ac7b-d2ab23b45757" (UID: "ef041e51-918d-41ef-ac7b-d2ab23b45757"). InnerVolumeSpecName "kube-api-access-r4sxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.146368 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4sxn\" (UniqueName: \"kubernetes.io/projected/ef041e51-918d-41ef-ac7b-d2ab23b45757-kube-api-access-r4sxn\") on node \"crc\" DevicePath \"\"" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.146396 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423289 4984 generic.go:334] "Generic (PLEG): container finished" podID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" exitCode=0 Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423355 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7"} Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423396 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24tpt" event={"ID":"ef041e51-918d-41ef-ac7b-d2ab23b45757","Type":"ContainerDied","Data":"5f1a932494ac9a3424b4a534f629ccdc1759c36a76486ae46608fece3886a242"} Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423429 4984 scope.go:117] "RemoveContainer" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.423453 4984 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24tpt" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.461534 4984 scope.go:117] "RemoveContainer" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.489987 4984 scope.go:117] "RemoveContainer" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.581665 4984 scope.go:117] "RemoveContainer" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" Jan 30 11:18:09 crc kubenswrapper[4984]: E0130 11:18:09.582290 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7\": container with ID starting with 29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7 not found: ID does not exist" containerID="29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582343 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7"} err="failed to get container status \"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7\": rpc error: code = NotFound desc = could not find container \"29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7\": container with ID starting with 29d7bb71af54bd30d85dcf2aede7dfc07f08ecbbac3ff74972a8bdc969b00ef7 not found: ID does not exist" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582373 4984 scope.go:117] "RemoveContainer" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" Jan 30 11:18:09 crc kubenswrapper[4984]: E0130 11:18:09.582829 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98\": container with ID starting with e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98 not found: ID does not exist" containerID="e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582884 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98"} err="failed to get container status \"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98\": rpc error: code = NotFound desc = could not find container \"e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98\": container with ID starting with e441ddcc62c3e6417a7ff053a499c8b16f8b69fdc1dc9c70ed7da29832370c98 not found: ID does not exist" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.582918 4984 scope.go:117] "RemoveContainer" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" Jan 30 11:18:09 crc kubenswrapper[4984]: E0130 11:18:09.583321 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79\": container with ID starting with 0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79 not found: ID does not exist" containerID="0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79" Jan 30 11:18:09 crc kubenswrapper[4984]: I0130 11:18:09.583364 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79"} err="failed to get container status \"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79\": rpc error: code = NotFound desc = could not find container 
\"0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79\": container with ID starting with 0682a004c163176097a8ca2ce32340e14e1fe5c18b0156d89f56b92e058f1e79 not found: ID does not exist" Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.093806 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef041e51-918d-41ef-ac7b-d2ab23b45757" (UID: "ef041e51-918d-41ef-ac7b-d2ab23b45757"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.192819 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef041e51-918d-41ef-ac7b-d2ab23b45757-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.274292 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:18:11 crc kubenswrapper[4984]: I0130 11:18:11.291886 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-24tpt"] Jan 30 11:18:12 crc kubenswrapper[4984]: I0130 11:18:12.113071 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" path="/var/lib/kubelet/pods/ef041e51-918d-41ef-ac7b-d2ab23b45757/volumes" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.416395 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:19:50 crc kubenswrapper[4984]: E0130 11:19:50.417755 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.417773 4984 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" Jan 30 11:19:50 crc kubenswrapper[4984]: E0130 11:19:50.417797 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-content" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.417815 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-content" Jan 30 11:19:50 crc kubenswrapper[4984]: E0130 11:19:50.417847 4984 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-utilities" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.417856 4984 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="extract-utilities" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.418139 4984 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef041e51-918d-41ef-ac7b-d2ab23b45757" containerName="registry-server" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.420011 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.427061 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.557536 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.557596 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.557631 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.659843 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.659930 4984 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.659989 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.660758 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.660770 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.694464 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"redhat-marketplace-5pk6x\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") " pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:50 crc kubenswrapper[4984]: I0130 11:19:50.753066 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.219887 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"] Jan 30 11:19:51 crc kubenswrapper[4984]: W0130 11:19:51.228690 4984 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bde2870_f8fa_4a9d_89dc_5882c25fe044.slice/crio-4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5 WatchSource:0}: Error finding container 4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5: Status 404 returned error can't find the container with id 4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5 Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.579754 4984 generic.go:334] "Generic (PLEG): container finished" podID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e" exitCode=0 Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.579963 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"} Jan 30 11:19:51 crc kubenswrapper[4984]: I0130 11:19:51.580151 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerStarted","Data":"4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5"} Jan 30 11:19:52 crc kubenswrapper[4984]: I0130 11:19:52.595084 4984 generic.go:334] "Generic (PLEG): container finished" podID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765" exitCode=0 Jan 30 11:19:52 crc kubenswrapper[4984]: I0130 
11:19:52.595312 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"} Jan 30 11:19:53 crc kubenswrapper[4984]: I0130 11:19:53.608416 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerStarted","Data":"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"} Jan 30 11:19:53 crc kubenswrapper[4984]: I0130 11:19:53.629552 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5pk6x" podStartSLOduration=2.203899676 podStartE2EDuration="3.629532577s" podCreationTimestamp="2026-01-30 11:19:50 +0000 UTC" firstStartedPulling="2026-01-30 11:19:51.582416502 +0000 UTC m=+4096.148720336" lastFinishedPulling="2026-01-30 11:19:53.008049383 +0000 UTC m=+4097.574353237" observedRunningTime="2026-01-30 11:19:53.627165174 +0000 UTC m=+4098.193468998" watchObservedRunningTime="2026-01-30 11:19:53.629532577 +0000 UTC m=+4098.195836421" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.134269 4984 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.149940 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.184562 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.205198 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.205269 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.205454 4984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308038 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308103 4984 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308815 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.308918 4984 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.309272 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.349988 4984 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"community-operators-nzfnw\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") " pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:57 crc kubenswrapper[4984]: I0130 11:19:57.502537 4984 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw" Jan 30 11:19:58 crc kubenswrapper[4984]: I0130 11:19:58.144631 4984 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"] Jan 30 11:19:58 crc kubenswrapper[4984]: I0130 11:19:58.665861 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerStarted","Data":"b44973996cc05eb772cf42729fb3a64f9424ddbfb8fd891eced50913f8c45eac"} Jan 30 11:19:59 crc kubenswrapper[4984]: I0130 11:19:59.676566 4984 generic.go:334] "Generic (PLEG): container finished" podID="c429fec3-b80b-42aa-8488-74b853752056" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b" exitCode=0 Jan 30 11:19:59 crc kubenswrapper[4984]: I0130 11:19:59.676950 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"} Jan 30 11:20:00 crc kubenswrapper[4984]: I0130 11:20:00.753583 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:00 crc kubenswrapper[4984]: I0130 11:20:00.755507 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:01 crc kubenswrapper[4984]: I0130 11:20:01.052438 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:01 crc kubenswrapper[4984]: I0130 11:20:01.728563 4984 generic.go:334] "Generic (PLEG): container finished" podID="c429fec3-b80b-42aa-8488-74b853752056" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243" exitCode=0 Jan 30 11:20:01 crc 
kubenswrapper[4984]: I0130 11:20:01.735936 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"} Jan 30 11:20:01 crc kubenswrapper[4984]: I0130 11:20:01.873518 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5pk6x" Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.001470 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.002112 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.751641 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerStarted","Data":"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"} Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.788396 4984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nzfnw" podStartSLOduration=4.011580263 podStartE2EDuration="6.788370293s" podCreationTimestamp="2026-01-30 11:19:57 +0000 UTC" firstStartedPulling="2026-01-30 11:19:59.680141494 +0000 UTC m=+4104.246445328" lastFinishedPulling="2026-01-30 
11:20:02.456931514 +0000 UTC m=+4107.023235358" observedRunningTime="2026-01-30 11:20:03.77300756 +0000 UTC m=+4108.339311384" watchObservedRunningTime="2026-01-30 11:20:03.788370293 +0000 UTC m=+4108.354674137"
Jan 30 11:20:03 crc kubenswrapper[4984]: I0130 11:20:03.969494 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"]
Jan 30 11:20:04 crc kubenswrapper[4984]: I0130 11:20:04.757512 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5pk6x" podUID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerName="registry-server" containerID="cri-o://8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" gracePeriod=2
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.729415 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768486 4984 generic.go:334] "Generic (PLEG): container finished" podID="6bde2870-f8fa-4a9d-89dc-5882c25fe044" containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96" exitCode=0
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768521 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"}
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768542 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pk6x" event={"ID":"6bde2870-f8fa-4a9d-89dc-5882c25fe044","Type":"ContainerDied","Data":"4a71e09b6799af350a9e33a6ddaf40742948215601817207b86b787bf838caa5"}
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768558 4984 scope.go:117] "RemoveContainer"
containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.768581 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pk6x"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.795565 4984 scope.go:117] "RemoveContainer" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.801916 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") pod \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") "
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.802625 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") pod \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") "
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.803603 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") pod \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\" (UID: \"6bde2870-f8fa-4a9d-89dc-5882c25fe044\") "
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.804319 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities" (OuterVolumeSpecName: "utilities") pod "6bde2870-f8fa-4a9d-89dc-5882c25fe044" (UID: "6bde2870-f8fa-4a9d-89dc-5882c25fe044"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.804601 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.809964 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2" (OuterVolumeSpecName: "kube-api-access-7l8l2") pod "6bde2870-f8fa-4a9d-89dc-5882c25fe044" (UID: "6bde2870-f8fa-4a9d-89dc-5882c25fe044"). InnerVolumeSpecName "kube-api-access-7l8l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.841327 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bde2870-f8fa-4a9d-89dc-5882c25fe044" (UID: "6bde2870-f8fa-4a9d-89dc-5882c25fe044"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.862523 4984 scope.go:117] "RemoveContainer" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.906585 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l8l2\" (UniqueName: \"kubernetes.io/projected/6bde2870-f8fa-4a9d-89dc-5882c25fe044-kube-api-access-7l8l2\") on node \"crc\" DevicePath \"\""
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.906623 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bde2870-f8fa-4a9d-89dc-5882c25fe044-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.918676 4984 scope.go:117] "RemoveContainer" containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"
Jan 30 11:20:05 crc kubenswrapper[4984]: E0130 11:20:05.919452 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96\": container with ID starting with 8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96 not found: ID does not exist" containerID="8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.919520 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96"} err="failed to get container status \"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96\": rpc error: code = NotFound desc = could not find container \"8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96\": container with ID starting with 8ebd50d752dc9ac008e910afc6d3da225048214af5ed80443217f03dd9542f96 not
found: ID does not exist"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.919555 4984 scope.go:117] "RemoveContainer" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"
Jan 30 11:20:05 crc kubenswrapper[4984]: E0130 11:20:05.920001 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765\": container with ID starting with 1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765 not found: ID does not exist" containerID="1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.920040 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765"} err="failed to get container status \"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765\": rpc error: code = NotFound desc = could not find container \"1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765\": container with ID starting with 1aa67704e09b621451f22d4c79af7f60c774ec25ed8152a60d4261e34fec5765 not found: ID does not exist"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.920065 4984 scope.go:117] "RemoveContainer" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"
Jan 30 11:20:05 crc kubenswrapper[4984]: E0130 11:20:05.920322 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e\": container with ID starting with 3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e not found: ID does not exist" containerID="3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"
Jan 30 11:20:05 crc kubenswrapper[4984]: I0130 11:20:05.920359 4984
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e"} err="failed to get container status \"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e\": rpc error: code = NotFound desc = could not find container \"3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e\": container with ID starting with 3abeaa82f8d7e48ad4884045daaa3326de078408304797854c54c5a4cda9116e not found: ID does not exist"
Jan 30 11:20:06 crc kubenswrapper[4984]: I0130 11:20:06.114006 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"]
Jan 30 11:20:06 crc kubenswrapper[4984]: I0130 11:20:06.114066 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pk6x"]
Jan 30 11:20:06 crc kubenswrapper[4984]: E0130 11:20:06.134752 4984 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bde2870_f8fa_4a9d_89dc_5882c25fe044.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 11:20:07 crc kubenswrapper[4984]: I0130 11:20:07.503388 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzfnw"
Jan 30 11:20:07 crc kubenswrapper[4984]: I0130 11:20:07.504099 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzfnw"
Jan 30 11:20:07 crc kubenswrapper[4984]: I0130 11:20:07.586021 4984 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzfnw"
Jan 30 11:20:08 crc kubenswrapper[4984]: I0130 11:20:08.105065 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bde2870-f8fa-4a9d-89dc-5882c25fe044"
path="/var/lib/kubelet/pods/6bde2870-f8fa-4a9d-89dc-5882c25fe044/volumes"
Jan 30 11:20:17 crc kubenswrapper[4984]: I0130 11:20:17.574520 4984 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzfnw"
Jan 30 11:20:17 crc kubenswrapper[4984]: I0130 11:20:17.633983 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"]
Jan 30 11:20:17 crc kubenswrapper[4984]: I0130 11:20:17.893095 4984 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nzfnw" podUID="c429fec3-b80b-42aa-8488-74b853752056" containerName="registry-server" containerID="cri-o://d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" gracePeriod=2
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.300194 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw"
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.376141 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") pod \"c429fec3-b80b-42aa-8488-74b853752056\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") "
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.376348 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") pod \"c429fec3-b80b-42aa-8488-74b853752056\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") "
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.376389 4984 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") pod
\"c429fec3-b80b-42aa-8488-74b853752056\" (UID: \"c429fec3-b80b-42aa-8488-74b853752056\") "
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.377349 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities" (OuterVolumeSpecName: "utilities") pod "c429fec3-b80b-42aa-8488-74b853752056" (UID: "c429fec3-b80b-42aa-8488-74b853752056"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.382468 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp" (OuterVolumeSpecName: "kube-api-access-fppfp") pod "c429fec3-b80b-42aa-8488-74b853752056" (UID: "c429fec3-b80b-42aa-8488-74b853752056"). InnerVolumeSpecName "kube-api-access-fppfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.436303 4984 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c429fec3-b80b-42aa-8488-74b853752056" (UID: "c429fec3-b80b-42aa-8488-74b853752056"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.478665 4984 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.478937 4984 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c429fec3-b80b-42aa-8488-74b853752056-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.479051 4984 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fppfp\" (UniqueName: \"kubernetes.io/projected/c429fec3-b80b-42aa-8488-74b853752056-kube-api-access-fppfp\") on node \"crc\" DevicePath \"\""
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904144 4984 generic.go:334] "Generic (PLEG): container finished" podID="c429fec3-b80b-42aa-8488-74b853752056" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c" exitCode=0
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904205 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"}
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904238 4984 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzfnw" event={"ID":"c429fec3-b80b-42aa-8488-74b853752056","Type":"ContainerDied","Data":"b44973996cc05eb772cf42729fb3a64f9424ddbfb8fd891eced50913f8c45eac"}
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.904287 4984 scope.go:117] "RemoveContainer" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130
11:20:18.905630 4984 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nzfnw"
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.931649 4984 scope.go:117] "RemoveContainer" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.954573 4984 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"]
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.963066 4984 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nzfnw"]
Jan 30 11:20:18 crc kubenswrapper[4984]: I0130 11:20:18.972520 4984 scope.go:117] "RemoveContainer" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"
Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.007798 4984 scope.go:117] "RemoveContainer" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"
Jan 30 11:20:19 crc kubenswrapper[4984]: E0130 11:20:19.011198 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c\": container with ID starting with d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c not found: ID does not exist" containerID="d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"
Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.011265 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c"} err="failed to get container status \"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c\": rpc error: code = NotFound desc = could not find container \"d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c\": container with ID starting with
d289279e69722f958d015abedb2b26d629277e0c1e16267118d2be337918ad4c not found: ID does not exist"
Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.011296 4984 scope.go:117] "RemoveContainer" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"
Jan 30 11:20:19 crc kubenswrapper[4984]: E0130 11:20:19.014363 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243\": container with ID starting with e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243 not found: ID does not exist" containerID="e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"
Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.014387 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243"} err="failed to get container status \"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243\": rpc error: code = NotFound desc = could not find container \"e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243\": container with ID starting with e9055e788532fde0de3fb1f69ec5c953708b6c89818d39c7ba7712973d0ca243 not found: ID does not exist"
Jan 30 11:20:19 crc kubenswrapper[4984]: I0130 11:20:19.014407 4984 scope.go:117] "RemoveContainer" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"
Jan 30 11:20:19 crc kubenswrapper[4984]: E0130 11:20:19.014821 4984 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b\": container with ID starting with ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b not found: ID does not exist" containerID="ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"
Jan 30 11:20:19 crc
kubenswrapper[4984]: I0130 11:20:19.014853 4984 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b"} err="failed to get container status \"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b\": rpc error: code = NotFound desc = could not find container \"ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b\": container with ID starting with ebe0f07815ec71ab60fd19f95653736ef51218c2580c450bbc5278a3128b7e2b not found: ID does not exist"
Jan 30 11:20:20 crc kubenswrapper[4984]: I0130 11:20:20.107528 4984 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c429fec3-b80b-42aa-8488-74b853752056" path="/var/lib/kubelet/pods/c429fec3-b80b-42aa-8488-74b853752056/volumes"
Jan 30 11:20:33 crc kubenswrapper[4984]: I0130 11:20:33.001332 4984 patch_prober.go:28] interesting pod/machine-config-daemon-m4gnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 11:20:33 crc kubenswrapper[4984]: I0130 11:20:33.003564 4984 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m4gnh" podUID="6c1bd910-b683-42bf-966f-51a04ac18bd2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"